00:00:00.001 Started by upstream project "autotest-per-patch" build number 122838 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.070 The recommended git tool is: git 00:00:00.070 using credential 00000000-0000-0000-0000-000000000002 00:00:00.073 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.114 Fetching changes from the remote Git repository 00:00:00.115 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.151 Using shallow fetch with depth 1 00:00:00.151 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.151 > git --version # timeout=10 00:00:00.189 > git --version # 'git version 2.39.2' 00:00:00.189 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.190 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.190 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.217 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.229 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.240 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:07.240 > git config core.sparsecheckout # timeout=10 00:00:07.251 > git read-tree -mu HEAD # timeout=10 00:00:07.266 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:07.285 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:07.285 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:07.376 [Pipeline] Start of Pipeline 00:00:07.392 [Pipeline] library 00:00:07.394 Loading library shm_lib@master 00:00:07.394 Library shm_lib@master is cached. Copying from home. 00:00:07.414 [Pipeline] node 00:00:07.428 Running on WFP3 in /var/jenkins/workspace/crypto-phy-autotest 00:00:07.430 [Pipeline] { 00:00:07.440 [Pipeline] catchError 00:00:07.442 [Pipeline] { 00:00:07.456 [Pipeline] wrap 00:00:07.466 [Pipeline] { 00:00:07.475 [Pipeline] stage 00:00:07.477 [Pipeline] { (Prologue) 00:00:07.668 [Pipeline] sh 00:00:07.949 + logger -p user.info -t JENKINS-CI 00:00:07.970 [Pipeline] echo 00:00:07.972 Node: WFP3 00:00:07.981 [Pipeline] sh 00:00:08.280 [Pipeline] setCustomBuildProperty 00:00:08.294 [Pipeline] echo 00:00:08.296 Cleanup processes 00:00:08.301 [Pipeline] sh 00:00:08.583 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:08.583 3867214 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:08.597 [Pipeline] sh 00:00:08.886 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:08.886 ++ grep -v 'sudo pgrep' 00:00:08.886 ++ awk '{print $1}' 00:00:08.886 + sudo kill -9 00:00:08.886 + true 00:00:08.901 [Pipeline] cleanWs 00:00:08.911 [WS-CLEANUP] Deleting project workspace... 00:00:08.911 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.918 [WS-CLEANUP] done 00:00:08.922 [Pipeline] setCustomBuildProperty 00:00:08.938 [Pipeline] sh 00:00:09.218 + sudo git config --global --replace-all safe.directory '*' 00:00:09.296 [Pipeline] nodesByLabel 00:00:09.298 Found a total of 1 nodes with the 'sorcerer' label 00:00:09.308 [Pipeline] httpRequest 00:00:09.313 HttpMethod: GET 00:00:09.313 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:09.316 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:09.332 Response Code: HTTP/1.1 200 OK 00:00:09.333 Success: Status code 200 is in the accepted range: 200,404 00:00:09.333 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:12.325 [Pipeline] sh 00:00:12.606 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:12.622 [Pipeline] httpRequest 00:00:12.626 HttpMethod: GET 00:00:12.627 URL: http://10.211.164.101/packages/spdk_2b14ffc3496d421004d230421561168eba4bac58.tar.gz 00:00:12.628 Sending request to url: http://10.211.164.101/packages/spdk_2b14ffc3496d421004d230421561168eba4bac58.tar.gz 00:00:12.645 Response Code: HTTP/1.1 200 OK 00:00:12.645 Success: Status code 200 is in the accepted range: 200,404 00:00:12.646 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_2b14ffc3496d421004d230421561168eba4bac58.tar.gz 00:00:35.926 [Pipeline] sh 00:00:36.209 + tar --no-same-owner -xf spdk_2b14ffc3496d421004d230421561168eba4bac58.tar.gz 00:00:40.410 [Pipeline] sh 00:00:40.687 + git -C spdk log --oneline -n5 00:00:40.687 2b14ffc34 nvmf: method for getting DH-HMAC-CHAP keys 00:00:40.687 091d58775 nvme: make spdk_nvme_dhchap_calculate() public 00:00:40.687 2c8f92576 nvmf/auth: send DH-HMAC-CHAP_challenge message 00:00:40.687 c06b0c79b nvmf: make allow_any_host its own byte 00:00:40.687 297733650 nvmf: don't touch subsystem->flags.allow_any_host directly 00:00:40.702 [Pipeline] } 00:00:40.715 [Pipeline] // stage 00:00:40.725 [Pipeline] stage 00:00:40.728 [Pipeline] { (Prepare) 00:00:40.745 [Pipeline] writeFile 00:00:40.761 [Pipeline] sh 00:00:41.042 + logger -p user.info -t JENKINS-CI 00:00:41.054 [Pipeline] sh 00:00:41.337 + logger -p user.info -t JENKINS-CI 00:00:41.347 [Pipeline] sh 00:00:41.624 + cat autorun-spdk.conf 00:00:41.624 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.624 SPDK_TEST_BLOCKDEV=1 00:00:41.624 SPDK_TEST_ISAL=1 00:00:41.624 SPDK_TEST_CRYPTO=1 00:00:41.624 SPDK_TEST_REDUCE=1 00:00:41.624 SPDK_TEST_VBDEV_COMPRESS=1 00:00:41.624 SPDK_RUN_UBSAN=1 00:00:41.631 RUN_NIGHTLY=0 00:00:41.638 [Pipeline] readFile 00:00:41.661 [Pipeline] withEnv 00:00:41.662 [Pipeline] { 00:00:41.673 [Pipeline] sh 00:00:41.954 + set -ex 00:00:41.954 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]] 00:00:41.954 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:00:41.954 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.954 ++ SPDK_TEST_BLOCKDEV=1 00:00:41.954 ++ SPDK_TEST_ISAL=1 00:00:41.954 ++ SPDK_TEST_CRYPTO=1 00:00:41.954 ++ SPDK_TEST_REDUCE=1 00:00:41.954 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:00:41.954 ++ SPDK_RUN_UBSAN=1 00:00:41.954 ++ RUN_NIGHTLY=0 00:00:41.954 + case $SPDK_TEST_NVMF_NICS in 00:00:41.954 + DRIVERS= 00:00:41.954 + [[ -n '' ]] 00:00:41.954 + exit 0 00:00:41.964 [Pipeline] } 00:00:41.985 [Pipeline] // withEnv 00:00:41.991 [Pipeline] } 00:00:42.009 [Pipeline] // stage 00:00:42.019 [Pipeline] catchError 00:00:42.021 [Pipeline] { 
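The Prepare stage above writes autorun-spdk.conf, sources it under set -ex, and branches on SPDK_TEST_NVMF_NICS to decide whether any NIC drivers need loading (none here, hence the traced "+ exit 0"). A minimal sketch of that flag-file pattern, using the variable names visible in the trace; the mlx5 case arm is a hypothetical example, not taken from this run:

    #!/usr/bin/env bash
    # Sketch: consume a flag file like the autorun-spdk.conf written above.
    set -ex
    conf=/var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
    if [[ -f $conf ]]; then
        source "$conf"
    fi
    case "${SPDK_TEST_NVMF_NICS:-}" in
        mlx5) DRIVERS='mlx5_core mlx5_ib' ;;  # hypothetical arm, not from this run
        *) DRIVERS='' ;;                      # this run: flag unset, no drivers
    esac
    if [[ -z $DRIVERS ]]; then
        exit 0                                # matches the '+ exit 0' traced above
    fi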
00:00:42.036 [Pipeline] timeout
00:00:42.036 Timeout set to expire in 40 min
00:00:42.037 [Pipeline] {
00:00:42.051 [Pipeline] stage
00:00:42.052 [Pipeline] { (Tests)
00:00:42.063 [Pipeline] sh
00:00:42.341 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest
00:00:42.341 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest
00:00:42.341 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest
00:00:42.341 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]]
00:00:42.341 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:42.341 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output
00:00:42.341 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]]
00:00:42.341 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:42.341 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output
00:00:42.341 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:42.341 + cd /var/jenkins/workspace/crypto-phy-autotest
00:00:42.341 + source /etc/os-release
00:00:42.341 ++ NAME='Fedora Linux'
00:00:42.341 ++ VERSION='38 (Cloud Edition)'
00:00:42.341 ++ ID=fedora
00:00:42.341 ++ VERSION_ID=38
00:00:42.341 ++ VERSION_CODENAME=
00:00:42.341 ++ PLATFORM_ID=platform:f38
00:00:42.341 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:42.341 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:42.341 ++ LOGO=fedora-logo-icon
00:00:42.341 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:42.341 ++ HOME_URL=https://fedoraproject.org/
00:00:42.341 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:42.341 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:42.341 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:42.341 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:42.341 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:42.341 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:42.341 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:42.341 ++ SUPPORT_END=2024-05-14
00:00:42.341 ++ VARIANT='Cloud Edition'
00:00:42.341 ++ VARIANT_ID=cloud
00:00:42.341 + uname -a
00:00:42.341 Linux spdk-wfp-03 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 02:47:10 UTC 2024 x86_64 GNU/Linux
00:00:42.341 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:00:45.660 Hugepages
00:00:45.660 node     hugesize     free /  total
00:00:45.660 node0   1048576kB        0 /      0
00:00:45.660 node0      2048kB        0 /      0
00:00:45.660 node1   1048576kB        0 /      0
00:00:45.660 node1      2048kB        0 /      0
00:00:45.660
00:00:45.660 Type     BDF             Vendor Device NUMA    Driver      Device     Block devices
00:00:45.660 I/OAT    0000:00:04.0    8086   2021   0       ioatdma     -          -
00:00:45.660 I/OAT    0000:00:04.1    8086   2021   0       ioatdma     -          -
00:00:45.660 I/OAT    0000:00:04.2    8086   2021   0       ioatdma     -          -
00:00:45.660 I/OAT    0000:00:04.3    8086   2021   0       ioatdma     -          -
00:00:45.660 I/OAT    0000:00:04.4    8086   2021   0       ioatdma     -          -
00:00:45.660 I/OAT    0000:00:04.5    8086   2021   0       ioatdma     -          -
00:00:45.660 I/OAT    0000:00:04.6    8086   2021   0       ioatdma     -          -
00:00:45.660 I/OAT    0000:00:04.7    8086   2021   0       ioatdma     -          -
00:00:45.660 NVMe     0000:5e:00.0    8086   0a54   0       nvme        nvme1      nvme1n1
00:00:45.660 NVMe     0000:5f:00.0    1b96   2600   0       nvme        nvme0      nvme0n1 nvme0n2
00:00:45.660 I/OAT    0000:80:04.0    8086   2021   1       ioatdma     -          -
00:00:45.660 I/OAT    0000:80:04.1    8086   2021   1       ioatdma     -          -
00:00:45.660 I/OAT    0000:80:04.2    8086   2021   1       ioatdma     -          -
00:00:45.660 I/OAT    0000:80:04.3    8086   2021   1       ioatdma     -          -
00:00:45.660 I/OAT    0000:80:04.4    8086   2021   1       ioatdma     -          -
00:00:45.660 I/OAT    0000:80:04.5    8086   2021   1       ioatdma     -          -
00:00:45.660 I/OAT    0000:80:04.6    8086   2021   1       ioatdma     -          -
00:00:45.660 I/OAT    0000:80:04.7    8086   2021   1       ioatdma     -          -
00:00:45.660 + rm -f /tmp/spdk-ld-path
00:00:45.660 + source autorun-spdk.conf
00:00:45.660 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:45.660 ++ SPDK_TEST_BLOCKDEV=1
00:00:45.660 ++ SPDK_TEST_ISAL=1
00:00:45.660 ++ SPDK_TEST_CRYPTO=1
00:00:45.660 ++ SPDK_TEST_REDUCE=1
00:00:45.660 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:45.660 ++ SPDK_RUN_UBSAN=1
00:00:45.660 ++ RUN_NIGHTLY=0
00:00:45.660 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:45.660 + [[ -n '' ]]
00:00:45.660 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:45.660 + for M in /var/spdk/build-*-manifest.txt
00:00:45.660 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:45.660 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:45.660 + for M in /var/spdk/build-*-manifest.txt
00:00:45.660 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:45.660 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:45.660 ++ uname
00:00:45.660 + [[ Linux == \L\i\n\u\x ]]
00:00:45.660 + sudo dmesg -T
00:00:45.660 + sudo dmesg --clear
00:00:45.660 + dmesg_pid=3868268
00:00:45.660 + [[ Fedora Linux == FreeBSD ]]
00:00:45.660 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:45.660 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:45.660 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:45.660 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:45.660 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:45.660 + [[ -x /usr/src/fio-static/fio ]]
00:00:45.660 + sudo dmesg -Tw
00:00:45.660 + export FIO_BIN=/usr/src/fio-static/fio
00:00:45.660 + FIO_BIN=/usr/src/fio-static/fio
00:00:45.660 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:45.660 + [[ !
-v VFIO_QEMU_BIN ]] 00:00:45.660 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:45.660 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:45.660 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:45.660 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:45.660 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:45.660 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:45.660 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:00:45.661 Test configuration: 00:00:45.661 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.661 SPDK_TEST_BLOCKDEV=1 00:00:45.661 SPDK_TEST_ISAL=1 00:00:45.661 SPDK_TEST_CRYPTO=1 00:00:45.661 SPDK_TEST_REDUCE=1 00:00:45.661 SPDK_TEST_VBDEV_COMPRESS=1 00:00:45.661 SPDK_RUN_UBSAN=1 00:00:45.661 RUN_NIGHTLY=0 02:55:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:00:45.661 02:55:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:45.661 02:55:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:45.661 02:55:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:45.661 02:55:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:45.661 02:55:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:45.661 02:55:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:45.661 02:55:16 -- paths/export.sh@5 -- $ export PATH 00:00:45.661 02:55:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:45.661 02:55:16 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:00:45.661 02:55:16 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:45.661 02:55:16 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715734516.XXXXXX 00:00:45.661 02:55:16 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715734516.aIchrw 00:00:45.661 02:55:16 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:45.661 02:55:16 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 
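From the spdk/autorun.sh invocation above onward, each traced command carries a prefix like "02:55:16 -- common/autobuild_common.sh@15 -- $" instead of bash's plain "+" markers. That is ordinary set -x tracing with a customized PS4 that expands a timestamp, the source file, and the line number on every trace line. The exact PS4 string SPDK uses is not shown in this log, so the following is only an approximation of the idea:

    #!/usr/bin/env bash
    # Approximation: make xtrace lines carry time, file, and line number.
    export PS4='$(date +%T) -- ${BASH_SOURCE}@${LINENO} -- $ '
    set -x
    echo hello    # traces roughly as: 02:55:16 -- ./demo.sh@5 -- $ echo hello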
00:00:45.661 02:55:16 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:00:45.661 02:55:16 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:45.661 02:55:16 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:45.661 02:55:16 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:45.661 02:55:16 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:45.661 02:55:16 -- common/autotest_common.sh@10 -- $ set +x 00:00:45.661 02:55:16 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk' 00:00:45.661 02:55:16 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:45.661 02:55:16 -- pm/common@17 -- $ local monitor 00:00:45.661 02:55:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:45.661 02:55:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:45.661 02:55:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:45.661 02:55:16 -- pm/common@21 -- $ date +%s 00:00:45.661 02:55:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:45.661 02:55:16 -- pm/common@21 -- $ date +%s 00:00:45.661 02:55:16 -- pm/common@25 -- $ sleep 1 00:00:45.661 02:55:16 -- pm/common@21 -- $ date +%s 00:00:45.661 02:55:16 -- pm/common@21 -- $ date +%s 00:00:45.661 02:55:16 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715734516 00:00:45.661 02:55:16 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715734516 00:00:45.661 02:55:16 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715734516 00:00:45.661 02:55:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715734516 00:00:45.661 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715734516_collect-vmstat.pm.log 00:00:45.661 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715734516_collect-cpu-load.pm.log 00:00:45.661 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715734516_collect-cpu-temp.pm.log 00:00:45.920 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715734516_collect-bmc-pm.bmc.pm.log 00:00:46.858 02:55:17 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:46.858 02:55:17 -- spdk/autobuild.sh@11 -- $ 
SPDK_TEST_AUTOBUILD= 00:00:46.858 02:55:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:46.858 02:55:17 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:46.858 02:55:17 -- spdk/autobuild.sh@16 -- $ date -u 00:00:46.858 Wed May 15 12:55:17 AM UTC 2024 00:00:46.858 02:55:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:46.858 v24.05-pre-627-g2b14ffc34 00:00:46.858 02:55:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:46.858 02:55:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:46.858 02:55:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:46.858 02:55:17 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:46.858 02:55:17 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:46.858 02:55:17 -- common/autotest_common.sh@10 -- $ set +x 00:00:46.858 ************************************ 00:00:46.858 START TEST ubsan 00:00:46.858 ************************************ 00:00:46.858 02:55:17 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:46.858 using ubsan 00:00:46.858 00:00:46.858 real 0m0.000s 00:00:46.858 user 0m0.000s 00:00:46.858 sys 0m0.000s 00:00:46.858 02:55:17 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:46.858 02:55:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:46.858 ************************************ 00:00:46.858 END TEST ubsan 00:00:46.858 ************************************ 00:00:46.858 02:55:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:46.858 02:55:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:46.858 02:55:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:46.858 02:55:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:46.858 02:55:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:46.858 02:55:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:46.858 02:55:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:46.858 02:55:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:46.858 02:55:17 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-shared 00:00:46.858 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:00:46.858 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:00:47.426 Using 'verbs' RDMA provider 00:01:03.248 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:15.468 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:15.468 Creating mk/config.mk...done. 00:01:15.468 Creating mk/cc.flags.mk...done. 00:01:15.468 Type 'make' to build. 00:01:15.468 02:55:44 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:15.468 02:55:44 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:15.468 02:55:44 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:15.468 02:55:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.468 ************************************ 00:01:15.468 START TEST make 00:01:15.468 ************************************ 00:01:15.468 02:55:44 make -- common/autotest_common.sh@1121 -- $ make -j96 00:01:15.468 make[1]: Nothing to be done for 'all'. 
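Condensing the configure and build steps this log just recorded into a standalone sketch (the configure flags are copied verbatim from the trace above; only the -j value is parameterized here, where the CI host hard-codes -j96):

    #!/usr/bin/env bash
    # Sketch: the SPDK build this log performs, as two explicit commands.
    cd /var/jenkins/workspace/crypto-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --with-vbdev-compress --with-dpdk-compressdev --with-crypto \
        --enable-ubsan --enable-coverage --with-ublk --with-shared
    make -j"$(nproc)"    # the run above used make -j96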
00:01:54.201 The Meson build system 00:01:54.201 Version: 1.3.1 00:01:54.201 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk 00:01:54.201 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp 00:01:54.201 Build type: native build 00:01:54.201 Program cat found: YES (/usr/bin/cat) 00:01:54.201 Project name: DPDK 00:01:54.201 Project version: 23.11.0 00:01:54.201 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:54.201 C linker for the host machine: cc ld.bfd 2.39-16 00:01:54.201 Host machine cpu family: x86_64 00:01:54.201 Host machine cpu: x86_64 00:01:54.201 Message: ## Building in Developer Mode ## 00:01:54.201 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:54.201 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:54.201 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:54.201 Program python3 found: YES (/usr/bin/python3) 00:01:54.201 Program cat found: YES (/usr/bin/cat) 00:01:54.201 Compiler for C supports arguments -march=native: YES 00:01:54.201 Checking for size of "void *" : 8 00:01:54.201 Checking for size of "void *" : 8 (cached) 00:01:54.201 Library m found: YES 00:01:54.201 Library numa found: YES 00:01:54.201 Has header "numaif.h" : YES 00:01:54.201 Library fdt found: NO 00:01:54.201 Library execinfo found: NO 00:01:54.201 Has header "execinfo.h" : YES 00:01:54.201 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:54.201 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:54.201 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:54.201 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:54.201 Run-time dependency openssl found: YES 3.0.9 00:01:54.201 Run-time dependency libpcap found: YES 1.10.4 00:01:54.201 Has header "pcap.h" with dependency libpcap: YES 00:01:54.201 Compiler for C supports arguments -Wcast-qual: YES 00:01:54.201 Compiler for C supports arguments -Wdeprecated: YES 00:01:54.201 Compiler for C supports arguments -Wformat: YES 00:01:54.201 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:54.201 Compiler for C supports arguments -Wformat-security: NO 00:01:54.201 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.201 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:54.201 Compiler for C supports arguments -Wnested-externs: YES 00:01:54.201 Compiler for C supports arguments -Wold-style-definition: YES 00:01:54.201 Compiler for C supports arguments -Wpointer-arith: YES 00:01:54.201 Compiler for C supports arguments -Wsign-compare: YES 00:01:54.201 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:54.201 Compiler for C supports arguments -Wundef: YES 00:01:54.201 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.201 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:54.201 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:54.201 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.201 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:54.201 Program objdump found: YES (/usr/bin/objdump) 00:01:54.201 Compiler for C supports arguments -mavx512f: YES 00:01:54.201 Checking if "AVX512 checking" compiles: YES 00:01:54.201 Fetching value of define "__SSE4_2__" : 1 00:01:54.201 Fetching value of define 
"__AES__" : 1 00:01:54.201 Fetching value of define "__AVX__" : 1 00:01:54.201 Fetching value of define "__AVX2__" : 1 00:01:54.201 Fetching value of define "__AVX512BW__" : 1 00:01:54.201 Fetching value of define "__AVX512CD__" : 1 00:01:54.201 Fetching value of define "__AVX512DQ__" : 1 00:01:54.201 Fetching value of define "__AVX512F__" : 1 00:01:54.202 Fetching value of define "__AVX512VL__" : 1 00:01:54.202 Fetching value of define "__PCLMUL__" : 1 00:01:54.202 Fetching value of define "__RDRND__" : 1 00:01:54.202 Fetching value of define "__RDSEED__" : 1 00:01:54.202 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:54.202 Fetching value of define "__znver1__" : (undefined) 00:01:54.202 Fetching value of define "__znver2__" : (undefined) 00:01:54.202 Fetching value of define "__znver3__" : (undefined) 00:01:54.202 Fetching value of define "__znver4__" : (undefined) 00:01:54.202 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.202 Message: lib/log: Defining dependency "log" 00:01:54.202 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.202 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.202 Checking for function "getentropy" : NO 00:01:54.202 Message: lib/eal: Defining dependency "eal" 00:01:54.202 Message: lib/ring: Defining dependency "ring" 00:01:54.202 Message: lib/rcu: Defining dependency "rcu" 00:01:54.202 Message: lib/mempool: Defining dependency "mempool" 00:01:54.202 Message: lib/mbuf: Defining dependency "mbuf" 00:01:54.202 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.202 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:54.202 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:54.202 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:54.202 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:54.202 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:54.202 Compiler for C supports arguments -mpclmul: YES 00:01:54.202 Compiler for C supports arguments -maes: YES 00:01:54.202 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:54.202 Compiler for C supports arguments -mavx512bw: YES 00:01:54.202 Compiler for C supports arguments -mavx512dq: YES 00:01:54.202 Compiler for C supports arguments -mavx512vl: YES 00:01:54.202 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:54.202 Compiler for C supports arguments -mavx2: YES 00:01:54.202 Compiler for C supports arguments -mavx: YES 00:01:54.202 Message: lib/net: Defining dependency "net" 00:01:54.202 Message: lib/meter: Defining dependency "meter" 00:01:54.202 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.202 Message: lib/pci: Defining dependency "pci" 00:01:54.202 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.202 Message: lib/hash: Defining dependency "hash" 00:01:54.202 Message: lib/timer: Defining dependency "timer" 00:01:54.202 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.202 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.202 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.202 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.202 Message: lib/power: Defining dependency "power" 00:01:54.202 Message: lib/reorder: Defining dependency "reorder" 00:01:54.202 Message: lib/security: Defining dependency "security" 00:01:54.202 Has header "linux/userfaultfd.h" : YES 00:01:54.202 Has header "linux/vduse.h" : YES 00:01:54.202 Message: lib/vhost: Defining dependency "vhost" 00:01:54.202 Compiler for C 
supports arguments -Wno-format-truncation: YES (cached) 00:01:54.202 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary" 00:01:54.202 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.202 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.202 Compiler for C supports arguments -std=c11: YES 00:01:54.202 Compiler for C supports arguments -Wno-strict-prototypes: YES 00:01:54.202 Compiler for C supports arguments -D_BSD_SOURCE: YES 00:01:54.202 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 00:01:54.202 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 00:01:54.202 Run-time dependency libmlx5 found: YES 1.24.46.0 00:01:54.202 Run-time dependency libibverbs found: YES 1.14.46.0 00:01:54.202 Library mtcr_ul found: NO 00:01:54.202 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 00:01:54.202 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 00:01:54.202 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 00:01:54.202 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, 
libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:01:55.139 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseKR4_Full" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseCR4_Full" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseSR4_Full" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseLR4_Full" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseKR4_Full" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseCR4_Full" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseSR4_Full" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseLR4_Full" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_25000baseCR_Full_BIT" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:01:55.139 
Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:01:55.139 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:01:55.140 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:01:55.140 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:01:55.140 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:01:55.140 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:01:55.140 Configuring mlx5_autoconf.h using configuration 00:01:55.140 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:01:55.140 Run-time dependency libcrypto found: YES 3.0.9 00:01:55.140 Library IPSec_MB found: YES 
00:01:55.140 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:01:55.140 Message: drivers/common/qat: Defining dependency "common_qat" 00:01:55.140 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:55.140 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:55.140 Library IPSec_MB found: YES 00:01:55.140 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:01:55.140 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:01:55.140 Compiler for C supports arguments -std=c11: YES (cached) 00:01:55.140 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:55.140 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:55.140 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:55.140 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:55.140 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:01:55.140 Run-time dependency libisal found: NO (tried pkgconfig) 00:01:55.140 Library libisal found: NO 00:01:55.140 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:01:55.140 Compiler for C supports arguments -std=c11: YES (cached) 00:01:55.140 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:55.400 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:55.400 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:55.400 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:55.400 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:01:55.400 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:55.400 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:55.400 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:55.400 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:55.400 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:55.400 Program doxygen found: YES (/usr/bin/doxygen) 00:01:55.400 Configuring doxy-api-html.conf using configuration 00:01:55.400 Configuring doxy-api-man.conf using configuration 00:01:55.400 Program mandb found: YES (/usr/bin/mandb) 00:01:55.400 Program sphinx-build found: NO 00:01:55.400 Configuring rte_build_config.h using configuration 00:01:55.400 Message: 00:01:55.400 ================= 00:01:55.400 Applications Enabled 00:01:55.400 ================= 00:01:55.400 00:01:55.400 apps: 00:01:55.400 00:01:55.400 00:01:55.400 Message: 00:01:55.400 ================= 00:01:55.400 Libraries Enabled 00:01:55.400 ================= 00:01:55.400 00:01:55.400 libs: 00:01:55.400 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:55.400 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:55.400 cryptodev, dmadev, power, reorder, security, vhost, 00:01:55.400 00:01:55.400 Message: 00:01:55.400 =============== 00:01:55.400 Drivers Enabled 00:01:55.400 =============== 00:01:55.400 00:01:55.400 common: 00:01:55.400 mlx5, qat, 00:01:55.400 bus: 00:01:55.400 auxiliary, pci, vdev, 00:01:55.400 mempool: 00:01:55.400 ring, 00:01:55.400 dma: 00:01:55.400 00:01:55.400 net: 00:01:55.400 00:01:55.400 crypto: 00:01:55.400 ipsec_mb, mlx5, 00:01:55.400 compress: 00:01:55.400 isal, mlx5, 00:01:55.400 vdpa: 00:01:55.400 00:01:55.400 00:01:55.400 Message: 00:01:55.400 ================= 00:01:55.400 Content Skipped 00:01:55.400 ================= 00:01:55.400 
00:01:55.400 apps: 00:01:55.400 dumpcap: explicitly disabled via build config 00:01:55.400 graph: explicitly disabled via build config 00:01:55.400 pdump: explicitly disabled via build config 00:01:55.400 proc-info: explicitly disabled via build config 00:01:55.400 test-acl: explicitly disabled via build config 00:01:55.400 test-bbdev: explicitly disabled via build config 00:01:55.400 test-cmdline: explicitly disabled via build config 00:01:55.400 test-compress-perf: explicitly disabled via build config 00:01:55.400 test-crypto-perf: explicitly disabled via build config 00:01:55.400 test-dma-perf: explicitly disabled via build config 00:01:55.400 test-eventdev: explicitly disabled via build config 00:01:55.400 test-fib: explicitly disabled via build config 00:01:55.400 test-flow-perf: explicitly disabled via build config 00:01:55.400 test-gpudev: explicitly disabled via build config 00:01:55.400 test-mldev: explicitly disabled via build config 00:01:55.400 test-pipeline: explicitly disabled via build config 00:01:55.400 test-pmd: explicitly disabled via build config 00:01:55.400 test-regex: explicitly disabled via build config 00:01:55.400 test-sad: explicitly disabled via build config 00:01:55.400 test-security-perf: explicitly disabled via build config 00:01:55.400 00:01:55.400 libs: 00:01:55.400 metrics: explicitly disabled via build config 00:01:55.400 acl: explicitly disabled via build config 00:01:55.400 bbdev: explicitly disabled via build config 00:01:55.400 bitratestats: explicitly disabled via build config 00:01:55.400 bpf: explicitly disabled via build config 00:01:55.400 cfgfile: explicitly disabled via build config 00:01:55.400 distributor: explicitly disabled via build config 00:01:55.400 efd: explicitly disabled via build config 00:01:55.400 eventdev: explicitly disabled via build config 00:01:55.400 dispatcher: explicitly disabled via build config 00:01:55.400 gpudev: explicitly disabled via build config 00:01:55.400 gro: explicitly disabled via build config 00:01:55.400 gso: explicitly disabled via build config 00:01:55.400 ip_frag: explicitly disabled via build config 00:01:55.400 jobstats: explicitly disabled via build config 00:01:55.400 latencystats: explicitly disabled via build config 00:01:55.400 lpm: explicitly disabled via build config 00:01:55.400 member: explicitly disabled via build config 00:01:55.400 pcapng: explicitly disabled via build config 00:01:55.400 rawdev: explicitly disabled via build config 00:01:55.400 regexdev: explicitly disabled via build config 00:01:55.400 mldev: explicitly disabled via build config 00:01:55.400 rib: explicitly disabled via build config 00:01:55.400 sched: explicitly disabled via build config 00:01:55.400 stack: explicitly disabled via build config 00:01:55.400 ipsec: explicitly disabled via build config 00:01:55.400 pdcp: explicitly disabled via build config 00:01:55.400 fib: explicitly disabled via build config 00:01:55.400 port: explicitly disabled via build config 00:01:55.400 pdump: explicitly disabled via build config 00:01:55.400 table: explicitly disabled via build config 00:01:55.400 pipeline: explicitly disabled via build config 00:01:55.400 graph: explicitly disabled via build config 00:01:55.400 node: explicitly disabled via build config 00:01:55.400 00:01:55.400 drivers: 00:01:55.400 common/cpt: not in enabled drivers build config 00:01:55.400 common/dpaax: not in enabled drivers build config 00:01:55.400 common/iavf: not in enabled drivers build config 00:01:55.400 common/idpf: not in enabled drivers build 
config 00:01:55.400 common/mvep: not in enabled drivers build config 00:01:55.400 common/octeontx: not in enabled drivers build config 00:01:55.400 bus/cdx: not in enabled drivers build config 00:01:55.400 bus/dpaa: not in enabled drivers build config 00:01:55.400 bus/fslmc: not in enabled drivers build config 00:01:55.400 bus/ifpga: not in enabled drivers build config 00:01:55.400 bus/platform: not in enabled drivers build config 00:01:55.400 bus/vmbus: not in enabled drivers build config 00:01:55.400 common/cnxk: not in enabled drivers build config 00:01:55.400 common/nfp: not in enabled drivers build config 00:01:55.400 common/sfc_efx: not in enabled drivers build config 00:01:55.400 mempool/bucket: not in enabled drivers build config 00:01:55.400 mempool/cnxk: not in enabled drivers build config 00:01:55.400 mempool/dpaa: not in enabled drivers build config 00:01:55.400 mempool/dpaa2: not in enabled drivers build config 00:01:55.400 mempool/octeontx: not in enabled drivers build config 00:01:55.400 mempool/stack: not in enabled drivers build config 00:01:55.400 dma/cnxk: not in enabled drivers build config 00:01:55.400 dma/dpaa: not in enabled drivers build config 00:01:55.400 dma/dpaa2: not in enabled drivers build config 00:01:55.400 dma/hisilicon: not in enabled drivers build config 00:01:55.400 dma/idxd: not in enabled drivers build config 00:01:55.400 dma/ioat: not in enabled drivers build config 00:01:55.400 dma/skeleton: not in enabled drivers build config 00:01:55.400 net/af_packet: not in enabled drivers build config 00:01:55.400 net/af_xdp: not in enabled drivers build config 00:01:55.400 net/ark: not in enabled drivers build config 00:01:55.400 net/atlantic: not in enabled drivers build config 00:01:55.400 net/avp: not in enabled drivers build config 00:01:55.400 net/axgbe: not in enabled drivers build config 00:01:55.400 net/bnx2x: not in enabled drivers build config 00:01:55.400 net/bnxt: not in enabled drivers build config 00:01:55.400 net/bonding: not in enabled drivers build config 00:01:55.400 net/cnxk: not in enabled drivers build config 00:01:55.400 net/cpfl: not in enabled drivers build config 00:01:55.400 net/cxgbe: not in enabled drivers build config 00:01:55.400 net/dpaa: not in enabled drivers build config 00:01:55.400 net/dpaa2: not in enabled drivers build config 00:01:55.400 net/e1000: not in enabled drivers build config 00:01:55.400 net/ena: not in enabled drivers build config 00:01:55.400 net/enetc: not in enabled drivers build config 00:01:55.400 net/enetfec: not in enabled drivers build config 00:01:55.400 net/enic: not in enabled drivers build config 00:01:55.400 net/failsafe: not in enabled drivers build config 00:01:55.400 net/fm10k: not in enabled drivers build config 00:01:55.400 net/gve: not in enabled drivers build config 00:01:55.400 net/hinic: not in enabled drivers build config 00:01:55.400 net/hns3: not in enabled drivers build config 00:01:55.400 net/i40e: not in enabled drivers build config 00:01:55.400 net/iavf: not in enabled drivers build config 00:01:55.400 net/ice: not in enabled drivers build config 00:01:55.400 net/idpf: not in enabled drivers build config 00:01:55.400 net/igc: not in enabled drivers build config 00:01:55.400 net/ionic: not in enabled drivers build config 00:01:55.400 net/ipn3ke: not in enabled drivers build config 00:01:55.400 net/ixgbe: not in enabled drivers build config 00:01:55.400 net/mana: not in enabled drivers build config 00:01:55.400 net/memif: not in enabled drivers build config 00:01:55.400 net/mlx4: not 
in enabled drivers build config 00:01:55.400 net/mlx5: not in enabled drivers build config 00:01:55.400 net/mvneta: not in enabled drivers build config 00:01:55.400 net/mvpp2: not in enabled drivers build config 00:01:55.400 net/netvsc: not in enabled drivers build config 00:01:55.400 net/nfb: not in enabled drivers build config 00:01:55.400 net/nfp: not in enabled drivers build config 00:01:55.400 net/ngbe: not in enabled drivers build config 00:01:55.400 net/null: not in enabled drivers build config 00:01:55.400 net/octeontx: not in enabled drivers build config 00:01:55.400 net/octeon_ep: not in enabled drivers build config 00:01:55.400 net/pcap: not in enabled drivers build config 00:01:55.400 net/pfe: not in enabled drivers build config 00:01:55.400 net/qede: not in enabled drivers build config 00:01:55.400 net/ring: not in enabled drivers build config 00:01:55.400 net/sfc: not in enabled drivers build config 00:01:55.400 net/softnic: not in enabled drivers build config 00:01:55.400 net/tap: not in enabled drivers build config 00:01:55.400 net/thunderx: not in enabled drivers build config 00:01:55.400 net/txgbe: not in enabled drivers build config 00:01:55.400 net/vdev_netvsc: not in enabled drivers build config 00:01:55.400 net/vhost: not in enabled drivers build config 00:01:55.400 net/virtio: not in enabled drivers build config 00:01:55.400 net/vmxnet3: not in enabled drivers build config 00:01:55.401 raw/*: missing internal dependency, "rawdev" 00:01:55.401 crypto/armv8: not in enabled drivers build config 00:01:55.401 crypto/bcmfs: not in enabled drivers build config 00:01:55.401 crypto/caam_jr: not in enabled drivers build config 00:01:55.401 crypto/ccp: not in enabled drivers build config 00:01:55.401 crypto/cnxk: not in enabled drivers build config 00:01:55.401 crypto/dpaa_sec: not in enabled drivers build config 00:01:55.401 crypto/dpaa2_sec: not in enabled drivers build config 00:01:55.401 crypto/mvsam: not in enabled drivers build config 00:01:55.401 crypto/nitrox: not in enabled drivers build config 00:01:55.401 crypto/null: not in enabled drivers build config 00:01:55.401 crypto/octeontx: not in enabled drivers build config 00:01:55.401 crypto/openssl: not in enabled drivers build config 00:01:55.401 crypto/scheduler: not in enabled drivers build config 00:01:55.401 crypto/uadk: not in enabled drivers build config 00:01:55.401 crypto/virtio: not in enabled drivers build config 00:01:55.401 compress/octeontx: not in enabled drivers build config 00:01:55.401 compress/zlib: not in enabled drivers build config 00:01:55.401 regex/*: missing internal dependency, "regexdev" 00:01:55.401 ml/*: missing internal dependency, "mldev" 00:01:55.401 vdpa/ifc: not in enabled drivers build config 00:01:55.401 vdpa/mlx5: not in enabled drivers build config 00:01:55.401 vdpa/nfp: not in enabled drivers build config 00:01:55.401 vdpa/sfc: not in enabled drivers build config 00:01:55.401 event/*: missing internal dependency, "eventdev" 00:01:55.401 baseband/*: missing internal dependency, "bbdev" 00:01:55.401 gpu/*: missing internal dependency, "gpudev" 00:01:55.401 00:01:55.401 00:01:55.969 Build targets in project: 115 00:01:55.969 00:01:55.969 DPDK 23.11.0 00:01:55.969 00:01:55.969 User defined options 00:01:55.969 buildtype : debug 00:01:55.969 default_library : shared 00:01:55.969 libdir : lib 00:01:55.969 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:01:55.969 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 
-I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror 00:01:55.969 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:01:55.969 cpu_instruction_set: native 00:01:55.969 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:55.969 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:55.969 enable_docs : false 00:01:55.969 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:01:55.969 enable_kmods : false 00:01:55.969 tests : false 00:01:55.969 00:01:55.969 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:56.543 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:01:56.543 [1/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:56.543 [2/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:56.543 [3/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:56.543 [4/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:56.543 [5/370] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:56.543 [6/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:56.543 [7/370] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:56.543 [8/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:56.803 [9/370] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:56.803 [10/370] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:56.803 [11/370] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:56.803 [12/370] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:56.803 [13/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:56.803 [14/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:56.803 [15/370] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:56.803 [16/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:56.803 [17/370] Linking static target lib/librte_kvargs.a 00:01:56.803 [18/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:56.803 [19/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:56.803 [20/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:56.803 [21/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:56.803 [22/370] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:56.803 [23/370] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:56.803 [24/370] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
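For orientation amid the compile stream: the "Build dir" and "User defined options" blocks above imply a meson setup invocation along these lines before ninja took over. This is a reconstruction from the printed options, not a command copied from the log, and the long c_args/disable/enable lists are omitted:

    #!/usr/bin/env bash
    # Reconstruction: how this DPDK tree was plausibly configured and built.
    cd /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk
    meson setup build-tmp \
        --buildtype debug --default-library shared --libdir lib \
        --prefix /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
    ninja -C build-tmp    # produces the [N/370] compile lines in this log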
00:01:56.803 [25/370] Linking static target lib/librte_log.a 00:01:56.803 [26/370] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:56.803 [27/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.803 [28/370] Linking static target lib/librte_pci.a 00:01:56.803 [29/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.803 [30/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.803 [31/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.803 [32/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:57.069 [33/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:57.069 [34/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:57.069 [35/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:57.069 [36/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:57.069 [37/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:57.069 [38/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:57.334 [39/370] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.334 [40/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:57.334 [41/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:57.335 [42/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:57.335 [43/370] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:57.335 [44/370] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:57.335 [45/370] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.335 [46/370] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:57.335 [47/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:57.335 [48/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:57.335 [49/370] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:57.335 [50/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:57.335 [51/370] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:57.335 [52/370] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:57.335 [53/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:57.335 [54/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:57.335 [55/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:57.335 [56/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:57.335 [57/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:57.335 [58/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:57.335 [59/370] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:57.335 [60/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:57.335 [61/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:57.335 [62/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:57.335 [63/370] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:57.335 [64/370] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:57.335 [65/370] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:57.335 [66/370] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:57.335 [67/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:57.335 [68/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:57.335 [69/370] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:57.335 [70/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:57.335 [71/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:57.335 [72/370] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:57.335 [73/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:57.335 [74/370] Linking static target lib/librte_meter.a 00:01:57.335 [75/370] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:57.335 [76/370] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:57.335 [77/370] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:57.335 [78/370] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:57.335 [79/370] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:57.335 [80/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:57.335 [81/370] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:57.335 [82/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:57.335 [83/370] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:57.335 [84/370] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.335 [85/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:57.335 [86/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:57.335 [87/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:57.335 [88/370] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:57.335 [89/370] Linking static target lib/librte_ring.a 00:01:57.335 [90/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:57.335 [91/370] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:57.335 [92/370] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:57.335 [93/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:57.335 [94/370] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:57.594 [95/370] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:57.594 [96/370] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:57.594 [97/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:57.594 [98/370] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:57.594 [99/370] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:57.594 [100/370] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:57.594 [101/370] Linking static target lib/librte_telemetry.a 00:01:57.594 [102/370] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:57.594 [103/370] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:57.594 [104/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:57.594 [105/370] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:57.594 [106/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:57.594 [107/370] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:57.594 [108/370] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:01:57.594 [109/370] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:57.594 [110/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:57.594 [111/370] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:57.594 [112/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:57.594 [113/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:57.594 [114/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:57.594 [115/370] Linking static target lib/librte_cmdline.a 00:01:57.594 [116/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:57.594 [117/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:57.594 [118/370] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:57.594 [119/370] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:57.594 [120/370] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:57.594 [121/370] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:57.594 [122/370] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:57.594 [123/370] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:57.594 [124/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:57.594 [125/370] Linking static target lib/librte_rcu.a 00:01:57.594 [126/370] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:57.594 [127/370] Linking static target lib/librte_mempool.a 00:01:57.594 [128/370] Linking static target lib/librte_net.a 00:01:57.594 [129/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:01:57.594 [130/370] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:57.594 [131/370] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:57.912 [132/370] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:57.912 [133/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:57.912 [134/370] Linking static target lib/librte_eal.a 00:01:57.912 [135/370] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:57.912 [136/370] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:57.912 [137/370] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:57.912 [138/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:57.912 [139/370] Linking static target lib/librte_timer.a 00:01:57.912 [140/370] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:57.912 [141/370] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:57.912 [142/370] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.912 [143/370] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.912 [144/370] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:57.912 [145/370] Linking static target lib/librte_compressdev.a 00:01:57.912 [146/370] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.912 [147/370] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:57.912 [148/370] Linking target lib/librte_log.so.24.0 00:01:57.912 [149/370] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:57.912 [150/370] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:57.912 [151/370] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.912 [152/370] Linking static target lib/librte_mbuf.a 00:01:57.912 [153/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.178 [154/370] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:58.178 [155/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.178 [156/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:01:58.178 [157/370] Linking static target lib/librte_dmadev.a 00:01:58.178 [158/370] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.178 [159/370] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.178 [160/370] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.178 [161/370] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.178 [162/370] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:58.178 [163/370] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.178 [164/370] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.178 [165/370] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.178 [166/370] Linking static target lib/librte_power.a 00:01:58.178 [167/370] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.178 [168/370] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.178 [169/370] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:01:58.178 [170/370] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:01:58.178 [171/370] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:01:58.178 [172/370] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:58.178 [173/370] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:58.178 [174/370] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:58.178 [175/370] Linking target lib/librte_kvargs.so.24.0 00:01:58.178 [176/370] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:58.178 [177/370] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:58.178 [178/370] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:58.439 [179/370] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.439 [180/370] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:58.439 [181/370] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.439 [182/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:01:58.439 [183/370] Linking target lib/librte_telemetry.so.24.0 00:01:58.439 [184/370] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.439 [185/370] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:58.439 [186/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:01:58.439 [187/370] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:58.439 [188/370] Linking static target lib/librte_reorder.a 00:01:58.439 [189/370] Linking static target lib/librte_hash.a 00:01:58.439 [190/370] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.439 [191/370] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:58.439 [192/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:58.439 [193/370] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:58.439 [194/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:01:58.439 [195/370] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:58.439 [196/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:01:58.439 [197/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:01:58.439 [198/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:01:58.439 [199/370] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:58.439 [200/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:01:58.439 [201/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:01:58.439 [202/370] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:01:58.439 [203/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:01:58.439 [204/370] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:58.439 [205/370] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:58.439 [206/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:01:58.439 [207/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:01:58.439 [208/370] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:58.439 [209/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:01:58.439 [210/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:01:58.439 [211/370] Compiling C object drivers/librte_bus_auxiliary.so.24.0.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:58.439 [212/370] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:58.439 [213/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:01:58.439 [214/370] Linking static target lib/librte_security.a 00:01:58.439 [215/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:01:58.439 [216/370] Linking static target drivers/librte_bus_auxiliary.a 00:01:58.439 [217/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:01:58.439 [218/370] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:58.439 [219/370] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.439 [220/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:01:58.439 [221/370] Compiling C object 
drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:01:58.699 [222/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:01:58.699 [223/370] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.699 [224/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:01:58.699 [225/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:01:58.699 [226/370] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.699 [227/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:01:58.699 [228/370] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.699 [229/370] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:58.699 [230/370] Linking static target drivers/librte_bus_vdev.a 00:01:58.699 [231/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:01:58.699 [232/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:01:58.699 [233/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:01:58.699 [234/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:01:58.699 [235/370] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.699 [236/370] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:58.699 [237/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:01:58.699 [238/370] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:01:58.699 [239/370] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.699 [240/370] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.699 [241/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:01:58.699 [242/370] Linking static target drivers/librte_bus_pci.a 00:01:58.699 [243/370] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:01:58.699 [244/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:01:58.699 [245/370] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.699 [246/370] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:58.699 [247/370] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:58.699 [248/370] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:58.699 [249/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:01:58.699 [250/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:01:58.699 [251/370] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:01:58.699 [252/370] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:01:58.699 [253/370] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:01:58.699 [254/370] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:58.699 [255/370] Compiling C object 
drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:01:58.699 [256/370] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.959 [257/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:01:58.959 [258/370] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.959 [259/370] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:01:58.959 [260/370] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.959 [261/370] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.959 [262/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:01:58.959 [263/370] Linking static target lib/librte_cryptodev.a 00:01:58.959 [264/370] Linking static target drivers/libtmp_rte_common_mlx5.a 00:01:58.959 [265/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:01:58.959 [266/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:01:58.959 [267/370] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:01:58.959 [268/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:01:58.959 [269/370] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.959 [270/370] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:01:58.959 [271/370] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:01:58.959 [272/370] Linking static target drivers/libtmp_rte_compress_isal.a 00:01:58.959 [273/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:58.959 [274/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:01:58.959 [275/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:01:58.959 [276/370] Linking static target lib/librte_ethdev.a 00:01:58.959 [277/370] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:58.959 [278/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:01:58.959 [279/370] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:01:58.959 [280/370] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.959 [281/370] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.959 [282/370] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:01:58.959 [283/370] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.959 [284/370] Linking static target drivers/librte_mempool_ring.a 00:01:58.959 [285/370] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:58.959 [286/370] Compiling C object drivers/librte_compress_mlx5.so.24.0.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:58.959 [287/370] Linking static target drivers/librte_compress_mlx5.a 00:01:59.218 [288/370] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.218 [289/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 
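[Editor's aside: the "Generating drivers/rte_*.pmd.c with a custom command" steps above emit each driver's PMD registration stub, which is then compiled into both a static archive (the "Linking static target drivers/librte_*.a" steps) and a versioned shared object (the "librte_*.so.24.0.p" objects). A quick way to confirm which PMDs a finished build produced is to list the driver outputs; a sketch, with the build directory taken from the ninja banner earlier:

  ls /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp/drivers/librte_*.so*
]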
00:01:59.218 [290/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:01:59.218 [291/370] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:01:59.218 [292/370] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:01:59.218 [293/370] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:01:59.218 [294/370] Compiling C object drivers/librte_common_mlx5.so.24.0.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:59.218 [295/370] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:59.218 [296/370] Compiling C object drivers/librte_compress_isal.so.24.0.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:59.218 [297/370] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:59.218 [298/370] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.218 [299/370] Linking static target drivers/librte_common_mlx5.a 00:01:59.218 [300/370] Linking static target drivers/librte_compress_isal.a 00:01:59.218 [301/370] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:59.218 [302/370] Compiling C object drivers/librte_crypto_mlx5.so.24.0.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:59.218 [303/370] Linking static target drivers/librte_crypto_mlx5.a 00:01:59.218 [304/370] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:01:59.218 [305/370] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.0.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:59.218 [306/370] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:59.476 [307/370] Linking static target drivers/librte_crypto_ipsec_mb.a 00:01:59.476 [308/370] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.734 [309/370] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:59.992 [310/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:01:59.992 [311/370] Linking static target drivers/libtmp_rte_common_qat.a 00:02:00.251 [312/370] Generating drivers/rte_common_qat.pmd.c with a custom command 00:02:00.251 [313/370] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:02:00.251 [314/370] Compiling C object drivers/librte_common_qat.so.24.0.p/meson-generated_.._rte_common_qat.pmd.c.o 00:02:00.509 [315/370] Linking static target drivers/librte_common_qat.a 00:02:00.768 [316/370] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.026 [317/370] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:01.026 [318/370] Linking static target lib/librte_vhost.a 00:02:02.930 [319/370] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.866 [320/370] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.152 [321/370] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.720 [322/370] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.720 [323/370] Linking target lib/librte_eal.so.24.0 00:02:07.979 [324/370] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:07.979 [325/370] Linking target 
lib/librte_pci.so.24.0 00:02:07.979 [326/370] Linking target drivers/librte_bus_vdev.so.24.0 00:02:07.979 [327/370] Linking target lib/librte_ring.so.24.0 00:02:07.979 [328/370] Linking target lib/librte_meter.so.24.0 00:02:07.979 [329/370] Linking target lib/librte_timer.so.24.0 00:02:07.979 [330/370] Linking target lib/librte_dmadev.so.24.0 00:02:07.979 [331/370] Linking target drivers/librte_bus_auxiliary.so.24.0 00:02:07.979 [332/370] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:08.238 [333/370] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:08.238 [334/370] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:08.238 [335/370] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:08.238 [336/370] Generating symbol file drivers/librte_bus_auxiliary.so.24.0.p/librte_bus_auxiliary.so.24.0.symbols 00:02:08.238 [337/370] Linking target drivers/librte_bus_pci.so.24.0 00:02:08.238 [338/370] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:08.238 [339/370] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:08.238 [340/370] Linking target lib/librte_rcu.so.24.0 00:02:08.238 [341/370] Linking target lib/librte_mempool.so.24.0 00:02:08.238 [342/370] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:08.238 [343/370] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:08.238 [344/370] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:08.496 [345/370] Linking target lib/librte_mbuf.so.24.0 00:02:08.496 [346/370] Linking target drivers/librte_mempool_ring.so.24.0 00:02:08.496 [347/370] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:08.496 [348/370] Linking target lib/librte_net.so.24.0 00:02:08.496 [349/370] Linking target lib/librte_reorder.so.24.0 00:02:08.496 [350/370] Linking target lib/librte_compressdev.so.24.0 00:02:08.496 [351/370] Linking target lib/librte_cryptodev.so.24.0 00:02:08.755 [352/370] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:08.755 [353/370] Generating symbol file lib/librte_compressdev.so.24.0.p/librte_compressdev.so.24.0.symbols 00:02:08.755 [354/370] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:08.755 [355/370] Linking target lib/librte_cmdline.so.24.0 00:02:08.755 [356/370] Linking target lib/librte_security.so.24.0 00:02:08.755 [357/370] Linking target lib/librte_hash.so.24.0 00:02:08.755 [358/370] Linking target drivers/librte_compress_isal.so.24.0 00:02:08.755 [359/370] Linking target lib/librte_ethdev.so.24.0 00:02:09.014 [360/370] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:09.014 [361/370] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:09.014 [362/370] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:09.014 [363/370] Linking target lib/librte_power.so.24.0 00:02:09.014 [364/370] Linking target lib/librte_vhost.so.24.0 00:02:09.014 [365/370] Linking target drivers/librte_common_mlx5.so.24.0 00:02:09.273 [366/370] Generating symbol file drivers/librte_common_mlx5.so.24.0.p/librte_common_mlx5.so.24.0.symbols 00:02:09.273 [367/370] Linking target drivers/librte_crypto_ipsec_mb.so.24.0 00:02:09.273 [368/370] 
Linking target drivers/librte_crypto_mlx5.so.24.0 00:02:09.273 [369/370] Linking target drivers/librte_compress_mlx5.so.24.0 00:02:09.273 [370/370] Linking target drivers/librte_common_qat.so.24.0 00:02:09.273 INFO: autodetecting backend as ninja 00:02:09.273 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:10.648 CC lib/log/log.o 00:02:10.648 CC lib/log/log_flags.o 00:02:10.648 CC lib/log/log_deprecated.o 00:02:10.648 CC lib/ut/ut.o 00:02:10.648 CC lib/ut_mock/mock.o 00:02:10.648 LIB libspdk_ut_mock.a 00:02:10.648 LIB libspdk_log.a 00:02:10.648 LIB libspdk_ut.a 00:02:10.648 SO libspdk_ut_mock.so.6.0 00:02:10.648 SO libspdk_log.so.7.0 00:02:10.648 SO libspdk_ut.so.2.0 00:02:10.648 SYMLINK libspdk_ut_mock.so 00:02:10.907 SYMLINK libspdk_ut.so 00:02:10.907 SYMLINK libspdk_log.so 00:02:11.165 CXX lib/trace_parser/trace.o 00:02:11.165 CC lib/util/base64.o 00:02:11.165 CC lib/util/bit_array.o 00:02:11.165 CC lib/util/cpuset.o 00:02:11.165 CC lib/util/crc16.o 00:02:11.165 CC lib/util/crc32.o 00:02:11.165 CC lib/util/crc32c.o 00:02:11.165 CC lib/util/crc32_ieee.o 00:02:11.165 CC lib/util/crc64.o 00:02:11.165 CC lib/util/dif.o 00:02:11.165 CC lib/util/fd.o 00:02:11.165 CC lib/util/file.o 00:02:11.165 CC lib/util/hexlify.o 00:02:11.165 CC lib/util/iov.o 00:02:11.165 CC lib/dma/dma.o 00:02:11.165 CC lib/util/math.o 00:02:11.165 CC lib/util/pipe.o 00:02:11.165 CC lib/ioat/ioat.o 00:02:11.165 CC lib/util/strerror_tls.o 00:02:11.165 CC lib/util/string.o 00:02:11.165 CC lib/util/uuid.o 00:02:11.165 CC lib/util/fd_group.o 00:02:11.165 CC lib/util/xor.o 00:02:11.165 CC lib/util/zipf.o 00:02:11.165 CC lib/vfio_user/host/vfio_user_pci.o 00:02:11.165 CC lib/vfio_user/host/vfio_user.o 00:02:11.424 LIB libspdk_dma.a 00:02:11.424 LIB libspdk_ioat.a 00:02:11.424 SO libspdk_dma.so.4.0 00:02:11.424 SO libspdk_ioat.so.7.0 00:02:11.424 SYMLINK libspdk_dma.so 00:02:11.424 SYMLINK libspdk_ioat.so 00:02:11.424 LIB libspdk_vfio_user.a 00:02:11.683 SO libspdk_vfio_user.so.5.0 00:02:11.683 SYMLINK libspdk_vfio_user.so 00:02:11.683 LIB libspdk_util.a 00:02:11.683 SO libspdk_util.so.9.0 00:02:11.941 SYMLINK libspdk_util.so 00:02:11.941 LIB libspdk_trace_parser.a 00:02:11.941 SO libspdk_trace_parser.so.5.0 00:02:12.200 SYMLINK libspdk_trace_parser.so 00:02:12.200 CC lib/json/json_parse.o 00:02:12.200 CC lib/json/json_util.o 00:02:12.200 CC lib/json/json_write.o 00:02:12.200 CC lib/env_dpdk/env.o 00:02:12.200 CC lib/env_dpdk/memory.o 00:02:12.200 CC lib/env_dpdk/pci.o 00:02:12.200 CC lib/conf/conf.o 00:02:12.200 CC lib/env_dpdk/threads.o 00:02:12.200 CC lib/env_dpdk/init.o 00:02:12.200 CC lib/vmd/vmd.o 00:02:12.200 CC lib/vmd/led.o 00:02:12.200 CC lib/env_dpdk/pci_ioat.o 00:02:12.200 CC lib/idxd/idxd.o 00:02:12.200 CC lib/env_dpdk/pci_virtio.o 00:02:12.200 CC lib/idxd/idxd_user.o 00:02:12.200 CC lib/env_dpdk/pci_vmd.o 00:02:12.200 CC lib/rdma/common.o 00:02:12.200 CC lib/rdma/rdma_verbs.o 00:02:12.200 CC lib/env_dpdk/pci_idxd.o 00:02:12.200 CC lib/env_dpdk/pci_event.o 00:02:12.200 CC lib/env_dpdk/pci_dpdk.o 00:02:12.200 CC lib/env_dpdk/sigbus_handler.o 00:02:12.200 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:12.200 CC lib/reduce/reduce.o 00:02:12.200 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:12.465 LIB libspdk_conf.a 00:02:12.465 SO libspdk_conf.so.6.0 00:02:12.465 LIB libspdk_json.a 00:02:12.465 SO libspdk_json.so.6.0 00:02:12.465 SYMLINK libspdk_conf.so 00:02:12.727 LIB libspdk_rdma.a 00:02:12.727 SO libspdk_rdma.so.6.0 00:02:12.727 SYMLINK 
libspdk_json.so 00:02:12.727 SYMLINK libspdk_rdma.so 00:02:12.727 LIB libspdk_idxd.a 00:02:12.985 SO libspdk_idxd.so.12.0 00:02:12.985 CC lib/jsonrpc/jsonrpc_server.o 00:02:12.985 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:12.985 CC lib/jsonrpc/jsonrpc_client.o 00:02:12.985 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:12.985 LIB libspdk_reduce.a 00:02:12.985 LIB libspdk_vmd.a 00:02:12.985 SYMLINK libspdk_idxd.so 00:02:12.985 SO libspdk_reduce.so.6.0 00:02:12.985 SO libspdk_vmd.so.6.0 00:02:12.985 SYMLINK libspdk_reduce.so 00:02:12.985 SYMLINK libspdk_vmd.so 00:02:13.243 LIB libspdk_jsonrpc.a 00:02:13.243 SO libspdk_jsonrpc.so.6.0 00:02:13.243 SYMLINK libspdk_jsonrpc.so 00:02:13.810 CC lib/rpc/rpc.o 00:02:13.810 LIB libspdk_env_dpdk.a 00:02:13.810 SO libspdk_env_dpdk.so.14.0 00:02:13.810 LIB libspdk_rpc.a 00:02:13.810 SO libspdk_rpc.so.6.0 00:02:13.810 SYMLINK libspdk_rpc.so 00:02:13.810 SYMLINK libspdk_env_dpdk.so 00:02:14.069 CC lib/notify/notify.o 00:02:14.069 CC lib/notify/notify_rpc.o 00:02:14.327 CC lib/trace/trace_flags.o 00:02:14.327 CC lib/trace/trace.o 00:02:14.327 CC lib/trace/trace_rpc.o 00:02:14.327 CC lib/keyring/keyring.o 00:02:14.327 CC lib/keyring/keyring_rpc.o 00:02:14.327 LIB libspdk_notify.a 00:02:14.327 SO libspdk_notify.so.6.0 00:02:14.585 LIB libspdk_keyring.a 00:02:14.585 SYMLINK libspdk_notify.so 00:02:14.585 LIB libspdk_trace.a 00:02:14.585 SO libspdk_keyring.so.1.0 00:02:14.585 SO libspdk_trace.so.10.0 00:02:14.585 SYMLINK libspdk_keyring.so 00:02:14.585 SYMLINK libspdk_trace.so 00:02:14.844 CC lib/sock/sock.o 00:02:14.844 CC lib/thread/thread.o 00:02:14.844 CC lib/thread/iobuf.o 00:02:14.844 CC lib/sock/sock_rpc.o 00:02:15.412 LIB libspdk_sock.a 00:02:15.412 SO libspdk_sock.so.9.0 00:02:15.412 SYMLINK libspdk_sock.so 00:02:15.671 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:15.671 CC lib/nvme/nvme_ctrlr.o 00:02:15.671 CC lib/nvme/nvme_fabric.o 00:02:15.671 CC lib/nvme/nvme_ns_cmd.o 00:02:15.671 CC lib/nvme/nvme_ns.o 00:02:15.671 CC lib/nvme/nvme_pcie_common.o 00:02:15.671 CC lib/nvme/nvme_pcie.o 00:02:15.671 CC lib/nvme/nvme_transport.o 00:02:15.671 CC lib/nvme/nvme_qpair.o 00:02:15.671 CC lib/nvme/nvme.o 00:02:15.671 CC lib/nvme/nvme_quirks.o 00:02:15.671 CC lib/nvme/nvme_discovery.o 00:02:15.671 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:15.671 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:15.671 CC lib/nvme/nvme_opal.o 00:02:15.671 CC lib/nvme/nvme_tcp.o 00:02:15.671 CC lib/nvme/nvme_io_msg.o 00:02:15.671 CC lib/nvme/nvme_poll_group.o 00:02:15.671 CC lib/nvme/nvme_zns.o 00:02:15.671 CC lib/nvme/nvme_stubs.o 00:02:15.671 CC lib/nvme/nvme_auth.o 00:02:15.671 CC lib/nvme/nvme_cuse.o 00:02:15.671 CC lib/nvme/nvme_rdma.o 00:02:16.667 LIB libspdk_thread.a 00:02:16.667 SO libspdk_thread.so.10.0 00:02:16.667 SYMLINK libspdk_thread.so 00:02:16.925 CC lib/init/json_config.o 00:02:16.925 CC lib/init/subsystem.o 00:02:16.925 CC lib/init/subsystem_rpc.o 00:02:16.925 CC lib/init/rpc.o 00:02:16.925 CC lib/accel/accel.o 00:02:16.926 CC lib/accel/accel_sw.o 00:02:16.926 CC lib/accel/accel_rpc.o 00:02:16.926 CC lib/virtio/virtio.o 00:02:16.926 CC lib/virtio/virtio_vhost_user.o 00:02:16.926 CC lib/virtio/virtio_vfio_user.o 00:02:16.926 CC lib/virtio/virtio_pci.o 00:02:16.926 CC lib/blob/blobstore.o 00:02:16.926 CC lib/blob/request.o 00:02:16.926 CC lib/blob/zeroes.o 00:02:16.926 CC lib/blob/blob_bs_dev.o 00:02:17.184 LIB libspdk_init.a 00:02:17.184 SO libspdk_init.so.5.0 00:02:17.184 LIB libspdk_virtio.a 00:02:17.184 SYMLINK libspdk_init.so 00:02:17.184 SO libspdk_virtio.so.7.0 00:02:17.442 SYMLINK 
libspdk_virtio.so 00:02:17.442 CC lib/event/app.o 00:02:17.442 CC lib/event/reactor.o 00:02:17.442 CC lib/event/log_rpc.o 00:02:17.442 CC lib/event/app_rpc.o 00:02:17.442 CC lib/event/scheduler_static.o 00:02:18.009 LIB libspdk_accel.a 00:02:18.009 SO libspdk_accel.so.15.0 00:02:18.009 LIB libspdk_event.a 00:02:18.009 SO libspdk_event.so.13.0 00:02:18.009 SYMLINK libspdk_accel.so 00:02:18.009 LIB libspdk_nvme.a 00:02:18.009 SYMLINK libspdk_event.so 00:02:18.268 SO libspdk_nvme.so.13.0 00:02:18.268 CC lib/bdev/bdev.o 00:02:18.268 CC lib/bdev/bdev_rpc.o 00:02:18.268 CC lib/bdev/bdev_zone.o 00:02:18.268 CC lib/bdev/part.o 00:02:18.268 CC lib/bdev/scsi_nvme.o 00:02:18.527 SYMLINK libspdk_nvme.so 00:02:19.905 LIB libspdk_blob.a 00:02:19.905 SO libspdk_blob.so.11.0 00:02:20.164 SYMLINK libspdk_blob.so 00:02:20.423 CC lib/lvol/lvol.o 00:02:20.423 CC lib/blobfs/blobfs.o 00:02:20.423 CC lib/blobfs/tree.o 00:02:20.991 LIB libspdk_lvol.a 00:02:20.991 SO libspdk_lvol.so.10.0 00:02:20.991 LIB libspdk_bdev.a 00:02:20.991 SYMLINK libspdk_lvol.so 00:02:21.250 SO libspdk_bdev.so.15.0 00:02:21.250 LIB libspdk_blobfs.a 00:02:21.250 SYMLINK libspdk_bdev.so 00:02:21.250 SO libspdk_blobfs.so.10.0 00:02:21.250 SYMLINK libspdk_blobfs.so 00:02:21.508 CC lib/nbd/nbd.o 00:02:21.508 CC lib/nbd/nbd_rpc.o 00:02:21.508 CC lib/ublk/ublk_rpc.o 00:02:21.508 CC lib/ublk/ublk.o 00:02:21.508 CC lib/ftl/ftl_core.o 00:02:21.508 CC lib/ftl/ftl_init.o 00:02:21.508 CC lib/ftl/ftl_layout.o 00:02:21.508 CC lib/ftl/ftl_debug.o 00:02:21.508 CC lib/ftl/ftl_io.o 00:02:21.508 CC lib/scsi/dev.o 00:02:21.508 CC lib/ftl/ftl_sb.o 00:02:21.508 CC lib/ftl/ftl_l2p.o 00:02:21.508 CC lib/scsi/lun.o 00:02:21.508 CC lib/ftl/ftl_nv_cache.o 00:02:21.508 CC lib/ftl/ftl_l2p_flat.o 00:02:21.508 CC lib/scsi/port.o 00:02:21.508 CC lib/scsi/scsi.o 00:02:21.508 CC lib/scsi/scsi_pr.o 00:02:21.508 CC lib/scsi/scsi_bdev.o 00:02:21.508 CC lib/ftl/ftl_band.o 00:02:21.508 CC lib/nvmf/ctrlr_discovery.o 00:02:21.508 CC lib/nvmf/ctrlr.o 00:02:21.508 CC lib/ftl/ftl_band_ops.o 00:02:21.509 CC lib/ftl/ftl_rq.o 00:02:21.509 CC lib/scsi/scsi_rpc.o 00:02:21.509 CC lib/ftl/ftl_writer.o 00:02:21.509 CC lib/scsi/task.o 00:02:21.509 CC lib/nvmf/ctrlr_bdev.o 00:02:21.509 CC lib/nvmf/nvmf.o 00:02:21.509 CC lib/ftl/ftl_reloc.o 00:02:21.509 CC lib/nvmf/subsystem.o 00:02:21.509 CC lib/ftl/ftl_l2p_cache.o 00:02:21.509 CC lib/nvmf/nvmf_rpc.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt.o 00:02:21.509 CC lib/nvmf/transport.o 00:02:21.509 CC lib/ftl/ftl_p2l.o 00:02:21.509 CC lib/nvmf/tcp.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:21.509 CC lib/nvmf/stubs.o 00:02:21.509 CC lib/nvmf/rdma.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:21.509 CC lib/nvmf/auth.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:21.509 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:21.509 CC lib/ftl/utils/ftl_conf.o 00:02:21.509 CC lib/ftl/utils/ftl_md.o 00:02:21.509 CC lib/ftl/utils/ftl_bitmap.o 00:02:21.509 CC lib/ftl/utils/ftl_mempool.o 00:02:21.509 CC lib/ftl/utils/ftl_property.o 00:02:21.509 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:21.509 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:21.509 CC lib/ftl/upgrade/ftl_layout_upgrade.o 
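[Editor's aside: the paired "SO libspdk_log.so.7.0" / "SYMLINK libspdk_log.so" entries in this stretch are SPDK's shared-library step: each library is linked with its ABI version suffix and then given an unversioned development symlink. If a version mismatch is suspected, the soname recorded in a built object can be inspected directly; a sketch, with the build/lib output path assumed rather than taken from this log:

  readelf -d build/lib/libspdk_log.so | grep SONAME
]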
00:02:21.509 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:21.509 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:21.509 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:21.509 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:21.509 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:21.509 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:21.509 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:21.509 CC lib/ftl/base/ftl_base_dev.o 00:02:21.509 CC lib/ftl/base/ftl_base_bdev.o 00:02:21.509 CC lib/ftl/ftl_trace.o 00:02:22.451 LIB libspdk_nbd.a 00:02:22.451 SO libspdk_nbd.so.7.0 00:02:22.451 LIB libspdk_ublk.a 00:02:22.451 SO libspdk_ublk.so.3.0 00:02:22.451 SYMLINK libspdk_nbd.so 00:02:22.451 LIB libspdk_scsi.a 00:02:22.451 SYMLINK libspdk_ublk.so 00:02:22.451 SO libspdk_scsi.so.9.0 00:02:22.451 SYMLINK libspdk_scsi.so 00:02:22.709 LIB libspdk_ftl.a 00:02:22.709 SO libspdk_ftl.so.9.0 00:02:22.709 CC lib/iscsi/conn.o 00:02:22.709 CC lib/iscsi/init_grp.o 00:02:22.709 CC lib/iscsi/iscsi.o 00:02:22.709 CC lib/iscsi/md5.o 00:02:22.709 CC lib/iscsi/param.o 00:02:22.709 CC lib/iscsi/portal_grp.o 00:02:22.709 CC lib/iscsi/tgt_node.o 00:02:22.709 CC lib/iscsi/task.o 00:02:22.709 CC lib/iscsi/iscsi_subsystem.o 00:02:22.709 CC lib/iscsi/iscsi_rpc.o 00:02:22.709 CC lib/vhost/vhost.o 00:02:22.968 CC lib/vhost/vhost_rpc.o 00:02:22.968 CC lib/vhost/vhost_scsi.o 00:02:22.968 CC lib/vhost/vhost_blk.o 00:02:22.968 CC lib/vhost/rte_vhost_user.o 00:02:23.227 SYMLINK libspdk_ftl.so 00:02:23.796 LIB libspdk_nvmf.a 00:02:24.054 SO libspdk_nvmf.so.18.0 00:02:24.054 LIB libspdk_vhost.a 00:02:24.054 SO libspdk_vhost.so.8.0 00:02:24.054 SYMLINK libspdk_vhost.so 00:02:24.054 SYMLINK libspdk_nvmf.so 00:02:24.313 LIB libspdk_iscsi.a 00:02:24.313 SO libspdk_iscsi.so.8.0 00:02:24.572 SYMLINK libspdk_iscsi.so 00:02:24.831 CC module/env_dpdk/env_dpdk_rpc.o 00:02:25.090 CC module/accel/error/accel_error_rpc.o 00:02:25.090 CC module/accel/error/accel_error.o 00:02:25.090 CC module/keyring/file/keyring.o 00:02:25.090 CC module/keyring/file/keyring_rpc.o 00:02:25.090 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:25.090 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:25.090 CC module/accel/iaa/accel_iaa.o 00:02:25.090 CC module/accel/iaa/accel_iaa_rpc.o 00:02:25.090 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:02:25.090 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:02:25.090 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:02:25.090 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:02:25.090 CC module/blob/bdev/blob_bdev.o 00:02:25.090 CC module/scheduler/gscheduler/gscheduler.o 00:02:25.090 CC module/accel/ioat/accel_ioat.o 00:02:25.090 LIB libspdk_env_dpdk_rpc.a 00:02:25.090 CC module/accel/ioat/accel_ioat_rpc.o 00:02:25.090 CC module/sock/posix/posix.o 00:02:25.090 CC module/accel/dsa/accel_dsa.o 00:02:25.090 CC module/accel/dsa/accel_dsa_rpc.o 00:02:25.090 SO libspdk_env_dpdk_rpc.so.6.0 00:02:25.090 SYMLINK libspdk_env_dpdk_rpc.so 00:02:25.349 LIB libspdk_keyring_file.a 00:02:25.349 LIB libspdk_scheduler_dpdk_governor.a 00:02:25.349 SO libspdk_keyring_file.so.1.0 00:02:25.349 LIB libspdk_scheduler_gscheduler.a 00:02:25.349 LIB libspdk_accel_error.a 00:02:25.349 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:25.349 SO libspdk_scheduler_gscheduler.so.4.0 00:02:25.349 LIB libspdk_scheduler_dynamic.a 00:02:25.349 LIB libspdk_accel_ioat.a 00:02:25.349 SO libspdk_accel_error.so.2.0 00:02:25.349 LIB libspdk_accel_iaa.a 00:02:25.349 SYMLINK libspdk_keyring_file.so 00:02:25.349 SO libspdk_scheduler_dynamic.so.4.0 00:02:25.349 
SO libspdk_accel_iaa.so.3.0 00:02:25.349 SO libspdk_accel_ioat.so.6.0 00:02:25.349 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:25.349 SYMLINK libspdk_scheduler_gscheduler.so 00:02:25.349 LIB libspdk_accel_dsa.a 00:02:25.349 LIB libspdk_blob_bdev.a 00:02:25.349 SYMLINK libspdk_accel_error.so 00:02:25.349 SYMLINK libspdk_scheduler_dynamic.so 00:02:25.349 SYMLINK libspdk_accel_iaa.so 00:02:25.349 SO libspdk_blob_bdev.so.11.0 00:02:25.349 SO libspdk_accel_dsa.so.5.0 00:02:25.349 SYMLINK libspdk_accel_ioat.so 00:02:25.608 SYMLINK libspdk_accel_dsa.so 00:02:25.608 SYMLINK libspdk_blob_bdev.so 00:02:25.866 LIB libspdk_sock_posix.a 00:02:25.866 LIB libspdk_accel_dpdk_compressdev.a 00:02:25.866 SO libspdk_sock_posix.so.6.0 00:02:25.866 SO libspdk_accel_dpdk_compressdev.so.3.0 00:02:25.866 CC module/bdev/gpt/gpt.o 00:02:25.866 CC module/bdev/gpt/vbdev_gpt.o 00:02:25.866 CC module/bdev/error/vbdev_error_rpc.o 00:02:25.866 CC module/bdev/delay/vbdev_delay.o 00:02:25.866 CC module/bdev/error/vbdev_error.o 00:02:25.866 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:25.866 CC module/bdev/ftl/bdev_ftl.o 00:02:25.866 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:25.866 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:25.866 CC module/bdev/nvme/bdev_nvme.o 00:02:25.866 CC module/bdev/nvme/vbdev_opal.o 00:02:25.866 CC module/blobfs/bdev/blobfs_bdev.o 00:02:25.866 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:25.866 CC module/bdev/nvme/nvme_rpc.o 00:02:25.867 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:25.867 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:25.867 CC module/bdev/nvme/bdev_mdns_client.o 00:02:25.867 CC module/bdev/iscsi/bdev_iscsi.o 00:02:25.867 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:25.867 CC module/bdev/passthru/vbdev_passthru.o 00:02:25.867 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:25.867 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:25.867 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:25.867 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:25.867 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:25.867 CC module/bdev/lvol/vbdev_lvol.o 00:02:25.867 CC module/bdev/malloc/bdev_malloc.o 00:02:25.867 CC module/bdev/null/bdev_null.o 00:02:25.867 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:25.867 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:25.867 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:25.867 CC module/bdev/raid/bdev_raid.o 00:02:25.867 CC module/bdev/aio/bdev_aio_rpc.o 00:02:25.867 CC module/bdev/aio/bdev_aio.o 00:02:25.867 CC module/bdev/null/bdev_null_rpc.o 00:02:25.867 CC module/bdev/raid/bdev_raid_rpc.o 00:02:25.867 CC module/bdev/raid/raid0.o 00:02:25.867 CC module/bdev/raid/bdev_raid_sb.o 00:02:25.867 CC module/bdev/raid/raid1.o 00:02:25.867 CC module/bdev/raid/concat.o 00:02:25.867 CC module/bdev/compress/vbdev_compress.o 00:02:25.867 CC module/bdev/split/vbdev_split.o 00:02:25.867 CC module/bdev/compress/vbdev_compress_rpc.o 00:02:25.867 CC module/bdev/split/vbdev_split_rpc.o 00:02:25.867 CC module/bdev/crypto/vbdev_crypto.o 00:02:25.867 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:02:26.125 SYMLINK libspdk_sock_posix.so 00:02:26.125 SYMLINK libspdk_accel_dpdk_compressdev.so 00:02:26.384 LIB libspdk_blobfs_bdev.a 00:02:26.384 LIB libspdk_bdev_split.a 00:02:26.384 SO libspdk_blobfs_bdev.so.6.0 00:02:26.384 LIB libspdk_bdev_error.a 00:02:26.384 LIB libspdk_bdev_gpt.a 00:02:26.384 SO libspdk_bdev_split.so.6.0 00:02:26.384 SO libspdk_bdev_error.so.6.0 00:02:26.384 SO libspdk_bdev_gpt.so.6.0 00:02:26.384 LIB libspdk_bdev_null.a 00:02:26.384 SYMLINK libspdk_blobfs_bdev.so 
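[Editor's aside: among the modules compiled above are vbdev_crypto and vbdev_compress, the virtual block devices this crypto autotest configuration exercises. Purely as an illustration of how such a module is driven once a target is running, a crypto bdev is typically stacked on a base bdev over JSON-RPC; the names, key material, and exact RPC signature below are placeholders and vary across SPDK releases:

  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py bdev_crypto_create Malloc0 Crypto0 crypto_aesni_mb 0123456789123456
]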
00:02:26.384 LIB libspdk_bdev_zone_block.a 00:02:26.384 LIB libspdk_bdev_ftl.a 00:02:26.384 SO libspdk_bdev_null.so.6.0 00:02:26.384 LIB libspdk_bdev_delay.a 00:02:26.384 LIB libspdk_bdev_aio.a 00:02:26.384 SYMLINK libspdk_bdev_split.so 00:02:26.384 LIB libspdk_bdev_passthru.a 00:02:26.384 SO libspdk_bdev_zone_block.so.6.0 00:02:26.384 SYMLINK libspdk_bdev_error.so 00:02:26.384 SO libspdk_bdev_ftl.so.6.0 00:02:26.384 LIB libspdk_bdev_iscsi.a 00:02:26.384 SYMLINK libspdk_bdev_gpt.so 00:02:26.384 LIB libspdk_bdev_malloc.a 00:02:26.384 SO libspdk_bdev_aio.so.6.0 00:02:26.384 LIB libspdk_bdev_crypto.a 00:02:26.384 SO libspdk_bdev_delay.so.6.0 00:02:26.384 SO libspdk_bdev_passthru.so.6.0 00:02:26.384 SO libspdk_bdev_iscsi.so.6.0 00:02:26.384 SYMLINK libspdk_bdev_null.so 00:02:26.384 SO libspdk_bdev_malloc.so.6.0 00:02:26.384 SO libspdk_bdev_crypto.so.6.0 00:02:26.384 SYMLINK libspdk_bdev_zone_block.so 00:02:26.384 LIB libspdk_bdev_compress.a 00:02:26.384 SYMLINK libspdk_bdev_ftl.so 00:02:26.642 SYMLINK libspdk_bdev_aio.so 00:02:26.642 SYMLINK libspdk_bdev_delay.so 00:02:26.642 SYMLINK libspdk_bdev_passthru.so 00:02:26.642 SO libspdk_bdev_compress.so.6.0 00:02:26.642 SYMLINK libspdk_bdev_iscsi.so 00:02:26.642 SYMLINK libspdk_bdev_malloc.so 00:02:26.642 SYMLINK libspdk_bdev_crypto.so 00:02:26.642 LIB libspdk_accel_dpdk_cryptodev.a 00:02:26.642 LIB libspdk_bdev_lvol.a 00:02:26.642 LIB libspdk_bdev_virtio.a 00:02:26.642 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:02:26.642 SYMLINK libspdk_bdev_compress.so 00:02:26.642 SO libspdk_bdev_lvol.so.6.0 00:02:26.642 SO libspdk_bdev_virtio.so.6.0 00:02:26.642 SYMLINK libspdk_bdev_lvol.so 00:02:26.642 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:02:26.642 SYMLINK libspdk_bdev_virtio.so 00:02:27.210 LIB libspdk_bdev_raid.a 00:02:27.210 SO libspdk_bdev_raid.so.6.0 00:02:27.210 SYMLINK libspdk_bdev_raid.so 00:02:28.587 LIB libspdk_bdev_nvme.a 00:02:28.587 SO libspdk_bdev_nvme.so.7.0 00:02:28.587 SYMLINK libspdk_bdev_nvme.so 00:02:29.154 CC module/event/subsystems/scheduler/scheduler.o 00:02:29.154 CC module/event/subsystems/vmd/vmd.o 00:02:29.154 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:29.154 CC module/event/subsystems/iobuf/iobuf.o 00:02:29.154 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:29.154 CC module/event/subsystems/keyring/keyring.o 00:02:29.154 CC module/event/subsystems/sock/sock.o 00:02:29.154 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:29.154 LIB libspdk_event_scheduler.a 00:02:29.154 SO libspdk_event_scheduler.so.4.0 00:02:29.154 LIB libspdk_event_sock.a 00:02:29.154 LIB libspdk_event_keyring.a 00:02:29.154 LIB libspdk_event_vmd.a 00:02:29.154 LIB libspdk_event_vhost_blk.a 00:02:29.154 LIB libspdk_event_iobuf.a 00:02:29.154 SO libspdk_event_sock.so.5.0 00:02:29.154 SO libspdk_event_keyring.so.1.0 00:02:29.154 SO libspdk_event_vhost_blk.so.3.0 00:02:29.154 SO libspdk_event_vmd.so.6.0 00:02:29.411 SO libspdk_event_iobuf.so.3.0 00:02:29.411 SYMLINK libspdk_event_scheduler.so 00:02:29.411 SYMLINK libspdk_event_keyring.so 00:02:29.411 SYMLINK libspdk_event_sock.so 00:02:29.411 SYMLINK libspdk_event_vhost_blk.so 00:02:29.411 SYMLINK libspdk_event_vmd.so 00:02:29.411 SYMLINK libspdk_event_iobuf.so 00:02:29.670 CC module/event/subsystems/accel/accel.o 00:02:29.929 LIB libspdk_event_accel.a 00:02:29.929 SO libspdk_event_accel.so.6.0 00:02:29.929 SYMLINK libspdk_event_accel.so 00:02:30.188 CC module/event/subsystems/bdev/bdev.o 00:02:30.479 LIB libspdk_event_bdev.a 00:02:30.479 SO libspdk_event_bdev.so.6.0 00:02:30.479 SYMLINK 
libspdk_event_bdev.so 00:02:30.737 CC module/event/subsystems/scsi/scsi.o 00:02:30.737 CC module/event/subsystems/nbd/nbd.o 00:02:30.737 CC module/event/subsystems/ublk/ublk.o 00:02:30.737 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:30.737 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:30.995 LIB libspdk_event_nbd.a 00:02:30.995 LIB libspdk_event_scsi.a 00:02:30.995 LIB libspdk_event_ublk.a 00:02:30.995 SO libspdk_event_nbd.so.6.0 00:02:30.995 SO libspdk_event_scsi.so.6.0 00:02:30.995 SO libspdk_event_ublk.so.3.0 00:02:30.995 SYMLINK libspdk_event_nbd.so 00:02:30.995 LIB libspdk_event_nvmf.a 00:02:30.995 SYMLINK libspdk_event_scsi.so 00:02:30.995 SYMLINK libspdk_event_ublk.so 00:02:31.254 SO libspdk_event_nvmf.so.6.0 00:02:31.254 SYMLINK libspdk_event_nvmf.so 00:02:31.513 CC module/event/subsystems/iscsi/iscsi.o 00:02:31.513 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:31.513 LIB libspdk_event_vhost_scsi.a 00:02:31.513 LIB libspdk_event_iscsi.a 00:02:31.513 SO libspdk_event_vhost_scsi.so.3.0 00:02:31.513 SO libspdk_event_iscsi.so.6.0 00:02:31.772 SYMLINK libspdk_event_vhost_scsi.so 00:02:31.772 SYMLINK libspdk_event_iscsi.so 00:02:31.772 SO libspdk.so.6.0 00:02:31.772 SYMLINK libspdk.so 00:02:32.344 CXX app/trace/trace.o 00:02:32.344 CC app/spdk_top/spdk_top.o 00:02:32.344 CC app/trace_record/trace_record.o 00:02:32.344 CC app/spdk_nvme_perf/perf.o 00:02:32.344 TEST_HEADER include/spdk/accel.h 00:02:32.344 TEST_HEADER include/spdk/accel_module.h 00:02:32.344 TEST_HEADER include/spdk/barrier.h 00:02:32.344 TEST_HEADER include/spdk/assert.h 00:02:32.344 TEST_HEADER include/spdk/base64.h 00:02:32.344 TEST_HEADER include/spdk/bdev.h 00:02:32.344 TEST_HEADER include/spdk/bdev_module.h 00:02:32.344 TEST_HEADER include/spdk/bdev_zone.h 00:02:32.344 CC test/rpc_client/rpc_client_test.o 00:02:32.344 CC app/spdk_lspci/spdk_lspci.o 00:02:32.344 CC app/spdk_nvme_identify/identify.o 00:02:32.344 TEST_HEADER include/spdk/bit_array.h 00:02:32.344 TEST_HEADER include/spdk/blob_bdev.h 00:02:32.344 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:32.344 TEST_HEADER include/spdk/bit_pool.h 00:02:32.344 TEST_HEADER include/spdk/blobfs.h 00:02:32.344 CC app/spdk_nvme_discover/discovery_aer.o 00:02:32.344 TEST_HEADER include/spdk/blob.h 00:02:32.344 TEST_HEADER include/spdk/config.h 00:02:32.344 TEST_HEADER include/spdk/cpuset.h 00:02:32.344 TEST_HEADER include/spdk/conf.h 00:02:32.344 TEST_HEADER include/spdk/crc16.h 00:02:32.344 TEST_HEADER include/spdk/crc32.h 00:02:32.344 TEST_HEADER include/spdk/dif.h 00:02:32.344 TEST_HEADER include/spdk/dma.h 00:02:32.344 TEST_HEADER include/spdk/crc64.h 00:02:32.344 TEST_HEADER include/spdk/endian.h 00:02:32.344 TEST_HEADER include/spdk/env_dpdk.h 00:02:32.344 TEST_HEADER include/spdk/env.h 00:02:32.344 TEST_HEADER include/spdk/event.h 00:02:32.344 TEST_HEADER include/spdk/fd.h 00:02:32.344 TEST_HEADER include/spdk/fd_group.h 00:02:32.344 TEST_HEADER include/spdk/file.h 00:02:32.344 TEST_HEADER include/spdk/ftl.h 00:02:32.344 TEST_HEADER include/spdk/gpt_spec.h 00:02:32.344 TEST_HEADER include/spdk/hexlify.h 00:02:32.344 TEST_HEADER include/spdk/idxd.h 00:02:32.344 TEST_HEADER include/spdk/histogram_data.h 00:02:32.344 TEST_HEADER include/spdk/idxd_spec.h 00:02:32.344 TEST_HEADER include/spdk/ioat.h 00:02:32.344 TEST_HEADER include/spdk/init.h 00:02:32.344 TEST_HEADER include/spdk/ioat_spec.h 00:02:32.344 TEST_HEADER include/spdk/iscsi_spec.h 00:02:32.344 TEST_HEADER include/spdk/json.h 00:02:32.344 TEST_HEADER include/spdk/jsonrpc.h 00:02:32.344 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:02:32.344 TEST_HEADER include/spdk/keyring.h 00:02:32.344 TEST_HEADER include/spdk/keyring_module.h 00:02:32.344 TEST_HEADER include/spdk/likely.h 00:02:32.344 TEST_HEADER include/spdk/lvol.h 00:02:32.344 TEST_HEADER include/spdk/log.h 00:02:32.344 TEST_HEADER include/spdk/memory.h 00:02:32.344 TEST_HEADER include/spdk/mmio.h 00:02:32.344 TEST_HEADER include/spdk/nbd.h 00:02:32.344 TEST_HEADER include/spdk/nvme.h 00:02:32.344 TEST_HEADER include/spdk/notify.h 00:02:32.344 TEST_HEADER include/spdk/nvme_intel.h 00:02:32.344 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:32.344 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:32.344 TEST_HEADER include/spdk/nvme_spec.h 00:02:32.344 CC app/nvmf_tgt/nvmf_main.o 00:02:32.344 TEST_HEADER include/spdk/nvme_zns.h 00:02:32.344 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:32.344 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:32.344 TEST_HEADER include/spdk/nvmf.h 00:02:32.344 TEST_HEADER include/spdk/nvmf_spec.h 00:02:32.344 CC app/vhost/vhost.o 00:02:32.344 TEST_HEADER include/spdk/nvmf_transport.h 00:02:32.344 TEST_HEADER include/spdk/opal.h 00:02:32.344 TEST_HEADER include/spdk/pci_ids.h 00:02:32.344 TEST_HEADER include/spdk/opal_spec.h 00:02:32.344 TEST_HEADER include/spdk/queue.h 00:02:32.344 TEST_HEADER include/spdk/reduce.h 00:02:32.344 TEST_HEADER include/spdk/pipe.h 00:02:32.344 TEST_HEADER include/spdk/rpc.h 00:02:32.344 TEST_HEADER include/spdk/scheduler.h 00:02:32.344 TEST_HEADER include/spdk/scsi_spec.h 00:02:32.344 TEST_HEADER include/spdk/scsi.h 00:02:32.344 TEST_HEADER include/spdk/sock.h 00:02:32.344 TEST_HEADER include/spdk/stdinc.h 00:02:32.344 TEST_HEADER include/spdk/thread.h 00:02:32.344 TEST_HEADER include/spdk/string.h 00:02:32.344 TEST_HEADER include/spdk/trace.h 00:02:32.344 TEST_HEADER include/spdk/trace_parser.h 00:02:32.344 CC app/spdk_dd/spdk_dd.o 00:02:32.344 CC app/iscsi_tgt/iscsi_tgt.o 00:02:32.344 TEST_HEADER include/spdk/tree.h 00:02:32.344 TEST_HEADER include/spdk/ublk.h 00:02:32.344 TEST_HEADER include/spdk/util.h 00:02:32.344 TEST_HEADER include/spdk/uuid.h 00:02:32.344 TEST_HEADER include/spdk/version.h 00:02:32.344 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:32.344 TEST_HEADER include/spdk/vhost.h 00:02:32.344 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:32.344 TEST_HEADER include/spdk/vmd.h 00:02:32.344 TEST_HEADER include/spdk/xor.h 00:02:32.344 TEST_HEADER include/spdk/zipf.h 00:02:32.344 CXX test/cpp_headers/accel.o 00:02:32.344 CXX test/cpp_headers/accel_module.o 00:02:32.344 CC app/spdk_tgt/spdk_tgt.o 00:02:32.344 CXX test/cpp_headers/barrier.o 00:02:32.344 CXX test/cpp_headers/assert.o 00:02:32.344 CXX test/cpp_headers/base64.o 00:02:32.344 CXX test/cpp_headers/bdev.o 00:02:32.344 CXX test/cpp_headers/bdev_module.o 00:02:32.344 CXX test/cpp_headers/bdev_zone.o 00:02:32.344 CXX test/cpp_headers/bit_array.o 00:02:32.344 CXX test/cpp_headers/bit_pool.o 00:02:32.344 CXX test/cpp_headers/blob_bdev.o 00:02:32.344 CXX test/cpp_headers/blobfs_bdev.o 00:02:32.344 CXX test/cpp_headers/blobfs.o 00:02:32.344 CXX test/cpp_headers/conf.o 00:02:32.344 CXX test/cpp_headers/blob.o 00:02:32.344 CXX test/cpp_headers/config.o 00:02:32.344 CXX test/cpp_headers/cpuset.o 00:02:32.344 CXX test/cpp_headers/crc16.o 00:02:32.344 CXX test/cpp_headers/crc32.o 00:02:32.344 CXX test/cpp_headers/crc64.o 00:02:32.344 CXX test/cpp_headers/dif.o 00:02:32.344 CC examples/util/zipf/zipf.o 00:02:32.611 CXX test/cpp_headers/dma.o 00:02:32.611 CC test/app/jsoncat/jsoncat.o 00:02:32.611 CC 
test/app/histogram_perf/histogram_perf.o 00:02:32.611 CC examples/ioat/perf/perf.o 00:02:32.611 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:32.611 CC examples/nvme/hotplug/hotplug.o 00:02:32.611 CC examples/nvme/arbitration/arbitration.o 00:02:32.611 CC examples/idxd/perf/perf.o 00:02:32.611 CC examples/vmd/led/led.o 00:02:32.611 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:32.611 CC test/env/pci/pci_ut.o 00:02:32.611 CC test/env/vtophys/vtophys.o 00:02:32.611 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:32.611 CC test/event/reactor_perf/reactor_perf.o 00:02:32.611 CC examples/ioat/verify/verify.o 00:02:32.611 CC examples/accel/perf/accel_perf.o 00:02:32.611 CC examples/vmd/lsvmd/lsvmd.o 00:02:32.611 CC examples/nvme/reconnect/reconnect.o 00:02:32.611 CC test/app/stub/stub.o 00:02:32.611 CC test/nvme/boot_partition/boot_partition.o 00:02:32.611 CC test/env/memory/memory_ut.o 00:02:32.611 CC examples/nvme/abort/abort.o 00:02:32.611 CC test/event/reactor/reactor.o 00:02:32.611 CC test/thread/poller_perf/poller_perf.o 00:02:32.611 CC test/nvme/e2edp/nvme_dp.o 00:02:32.611 CC examples/nvmf/nvmf/nvmf.o 00:02:32.611 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:32.611 CC test/event/app_repeat/app_repeat.o 00:02:32.611 CC examples/nvme/hello_world/hello_world.o 00:02:32.611 CC app/fio/nvme/fio_plugin.o 00:02:32.611 CC test/nvme/aer/aer.o 00:02:32.611 CC examples/bdev/hello_world/hello_bdev.o 00:02:32.611 CC examples/sock/hello_world/hello_sock.o 00:02:32.611 CC test/event/event_perf/event_perf.o 00:02:32.611 CC test/nvme/reset/reset.o 00:02:32.611 CC test/nvme/connect_stress/connect_stress.o 00:02:32.611 CC test/nvme/startup/startup.o 00:02:32.611 CC examples/thread/thread/thread_ex.o 00:02:32.611 CC test/nvme/cuse/cuse.o 00:02:32.611 CC examples/bdev/bdevperf/bdevperf.o 00:02:32.611 CC test/nvme/sgl/sgl.o 00:02:32.611 CC test/nvme/err_injection/err_injection.o 00:02:32.611 CC test/nvme/overhead/overhead.o 00:02:32.611 CC test/nvme/fdp/fdp.o 00:02:32.611 CC test/app/bdev_svc/bdev_svc.o 00:02:32.611 CC test/nvme/simple_copy/simple_copy.o 00:02:32.611 CC test/nvme/compliance/nvme_compliance.o 00:02:32.611 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:32.611 CC test/nvme/fused_ordering/fused_ordering.o 00:02:32.611 CC test/event/scheduler/scheduler.o 00:02:32.611 CC test/dma/test_dma/test_dma.o 00:02:32.611 CC test/nvme/reserve/reserve.o 00:02:32.611 CC test/blobfs/mkfs/mkfs.o 00:02:32.611 CC test/accel/dif/dif.o 00:02:32.611 CC test/bdev/bdevio/bdevio.o 00:02:32.611 CC examples/blob/cli/blobcli.o 00:02:32.611 CC app/fio/bdev/fio_plugin.o 00:02:32.611 CC examples/blob/hello_world/hello_blob.o 00:02:32.879 LINK rpc_client_test 00:02:32.879 LINK interrupt_tgt 00:02:32.879 LINK spdk_lspci 00:02:32.879 LINK nvmf_tgt 00:02:32.879 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:32.879 CC test/env/mem_callbacks/mem_callbacks.o 00:02:32.879 LINK vhost 00:02:32.879 LINK spdk_trace_record 00:02:32.879 LINK jsoncat 00:02:32.879 CC test/lvol/esnap/esnap.o 00:02:32.879 LINK histogram_perf 00:02:32.879 LINK lsvmd 00:02:32.879 LINK spdk_nvme_discover 00:02:33.141 LINK reactor 00:02:33.141 LINK poller_perf 00:02:33.141 LINK verify 00:02:33.141 LINK pmr_persistence 00:02:33.141 LINK iscsi_tgt 00:02:33.141 CXX test/cpp_headers/endian.o 00:02:33.141 LINK zipf 00:02:33.141 LINK boot_partition 00:02:33.141 LINK reactor_perf 00:02:33.141 LINK env_dpdk_post_init 00:02:33.141 LINK cmb_copy 00:02:33.141 CXX test/cpp_headers/env_dpdk.o 00:02:33.141 LINK vtophys 00:02:33.141 LINK event_perf 
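(The LIB/SO/SYMLINK triplets near the top of this build output reflect the usual versioned shared-object pattern: the real library is produced as libfoo.so.<major>.<minor>, its soname version is checked, and an unversioned symlink is laid down so plain -lfoo resolves at link time. A minimal sketch of that pattern, assuming the library name and version numbers seen in the log; the soname choice is an illustrative assumption, not SPDK's actual Makefile rule:

  # Sketch: build a versioned shared object and point a dev symlink at it.
  # libspdk_event_nbd.so.6.0 matches the SO/SYMLINK lines above; soname is assumed.
  gcc -shared -fPIC -Wl,-soname,libspdk_event_nbd.so.6 \
      event_nbd.o -o libspdk_event_nbd.so.6.0
  # Unversioned name for link-time resolution of -lspdk_event_nbd.
  ln -sf libspdk_event_nbd.so.6.0 libspdk_event_nbd.so
)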
00:02:33.141 LINK spdk_tgt 00:02:33.142 CXX test/cpp_headers/env.o 00:02:33.142 CXX test/cpp_headers/event.o 00:02:33.142 CXX test/cpp_headers/fd_group.o 00:02:33.142 CXX test/cpp_headers/fd.o 00:02:33.142 LINK app_repeat 00:02:33.142 LINK led 00:02:33.142 CXX test/cpp_headers/file.o 00:02:33.142 CXX test/cpp_headers/ftl.o 00:02:33.142 LINK bdev_svc 00:02:33.142 CXX test/cpp_headers/gpt_spec.o 00:02:33.142 CXX test/cpp_headers/hexlify.o 00:02:33.142 LINK startup 00:02:33.142 LINK connect_stress 00:02:33.142 CXX test/cpp_headers/histogram_data.o 00:02:33.142 CXX test/cpp_headers/idxd.o 00:02:33.142 CXX test/cpp_headers/idxd_spec.o 00:02:33.142 CXX test/cpp_headers/init.o 00:02:33.142 LINK doorbell_aers 00:02:33.142 CXX test/cpp_headers/ioat.o 00:02:33.142 CXX test/cpp_headers/ioat_spec.o 00:02:33.142 LINK err_injection 00:02:33.142 LINK ioat_perf 00:02:33.142 LINK stub 00:02:33.142 CXX test/cpp_headers/iscsi_spec.o 00:02:33.142 CXX test/cpp_headers/json.o 00:02:33.409 CXX test/cpp_headers/jsonrpc.o 00:02:33.409 LINK hotplug 00:02:33.409 LINK mkfs 00:02:33.409 LINK hello_world 00:02:33.409 LINK fused_ordering 00:02:33.409 LINK sgl 00:02:33.409 LINK reserve 00:02:33.409 CXX test/cpp_headers/keyring.o 00:02:33.409 LINK simple_copy 00:02:33.409 LINK hello_sock 00:02:33.409 LINK scheduler 00:02:33.409 LINK hello_bdev 00:02:33.409 CXX test/cpp_headers/keyring_module.o 00:02:33.409 LINK reconnect 00:02:33.409 LINK reset 00:02:33.409 LINK nvme_dp 00:02:33.409 LINK hello_blob 00:02:33.409 CXX test/cpp_headers/likely.o 00:02:33.409 LINK aer 00:02:33.409 LINK spdk_dd 00:02:33.409 CXX test/cpp_headers/log.o 00:02:33.409 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:33.409 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:33.409 LINK idxd_perf 00:02:33.409 LINK arbitration 00:02:33.409 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:33.409 LINK nvmf 00:02:33.409 LINK thread 00:02:33.409 LINK overhead 00:02:33.409 CXX test/cpp_headers/lvol.o 00:02:33.409 LINK fdp 00:02:33.409 LINK nvme_compliance 00:02:33.409 CXX test/cpp_headers/memory.o 00:02:33.681 LINK abort 00:02:33.681 CXX test/cpp_headers/mmio.o 00:02:33.681 CXX test/cpp_headers/nbd.o 00:02:33.681 CXX test/cpp_headers/notify.o 00:02:33.681 LINK pci_ut 00:02:33.681 CXX test/cpp_headers/nvme.o 00:02:33.681 LINK test_dma 00:02:33.681 CXX test/cpp_headers/nvme_intel.o 00:02:33.681 LINK spdk_trace 00:02:33.681 CXX test/cpp_headers/nvme_ocssd.o 00:02:33.681 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:33.681 CXX test/cpp_headers/nvme_spec.o 00:02:33.681 LINK accel_perf 00:02:33.681 CXX test/cpp_headers/nvme_zns.o 00:02:33.681 CXX test/cpp_headers/nvmf_cmd.o 00:02:33.681 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:33.681 CXX test/cpp_headers/nvmf.o 00:02:33.681 CXX test/cpp_headers/nvmf_spec.o 00:02:33.681 LINK bdevio 00:02:33.681 LINK dif 00:02:33.681 CXX test/cpp_headers/nvmf_transport.o 00:02:33.681 CXX test/cpp_headers/opal.o 00:02:33.681 CXX test/cpp_headers/opal_spec.o 00:02:33.682 LINK nvme_manage 00:02:33.682 CXX test/cpp_headers/pci_ids.o 00:02:33.682 CXX test/cpp_headers/pipe.o 00:02:33.682 CXX test/cpp_headers/queue.o 00:02:33.682 CXX test/cpp_headers/reduce.o 00:02:33.682 CXX test/cpp_headers/rpc.o 00:02:33.682 CXX test/cpp_headers/scheduler.o 00:02:33.682 CXX test/cpp_headers/scsi.o 00:02:33.682 CXX test/cpp_headers/scsi_spec.o 00:02:33.682 CXX test/cpp_headers/sock.o 00:02:33.682 CXX test/cpp_headers/string.o 00:02:33.682 CXX test/cpp_headers/stdinc.o 00:02:33.682 CXX test/cpp_headers/thread.o 00:02:33.682 CXX test/cpp_headers/trace.o 
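(The long run of CXX test/cpp_headers/*.o compiles above is a header self-containment check: every public SPDK header is compiled as its own translation unit, so a header that forgets one of its own includes fails the build immediately. A minimal sketch of the same idea, assuming headers under include/spdk/ and a generic g++ invocation; the generated file names are illustrative, not the harness's real layout:

  # Sketch: compile each public header standalone to prove it is self-contained.
  for h in include/spdk/*.h; do
      name=$(basename "$h" .h)
      # Each generated source includes exactly one header and nothing else.
      printf '#include <spdk/%s.h>\n' "$name" > "hdr_${name}.cpp"
      g++ -I include -c "hdr_${name}.cpp" -o /dev/null \
          || echo "header not self-contained: $h"
  done
)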
00:02:33.682 CXX test/cpp_headers/tree.o 00:02:33.682 CXX test/cpp_headers/trace_parser.o 00:02:33.682 CXX test/cpp_headers/ublk.o 00:02:33.682 CXX test/cpp_headers/util.o 00:02:33.682 CXX test/cpp_headers/uuid.o 00:02:33.682 CXX test/cpp_headers/version.o 00:02:33.682 CXX test/cpp_headers/vfio_user_pci.o 00:02:33.949 CXX test/cpp_headers/vfio_user_spec.o 00:02:33.949 CXX test/cpp_headers/vhost.o 00:02:33.949 CXX test/cpp_headers/vmd.o 00:02:33.949 CXX test/cpp_headers/xor.o 00:02:33.949 CXX test/cpp_headers/zipf.o 00:02:33.949 LINK blobcli 00:02:33.949 LINK nvme_fuzz 00:02:33.949 LINK spdk_nvme 00:02:33.949 LINK spdk_bdev 00:02:34.207 LINK spdk_nvme_perf 00:02:34.207 LINK mem_callbacks 00:02:34.207 LINK vhost_fuzz 00:02:34.207 LINK bdevperf 00:02:34.207 LINK spdk_nvme_identify 00:02:34.207 LINK spdk_top 00:02:34.466 LINK cuse 00:02:34.466 LINK memory_ut 00:02:35.843 LINK iscsi_fuzz 00:02:38.381 LINK esnap 00:02:38.949 00:02:38.949 real 1m24.854s 00:02:38.949 user 17m47.006s 00:02:38.949 sys 4m54.464s 00:02:38.949 02:57:09 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:38.949 02:57:09 make -- common/autotest_common.sh@10 -- $ set +x 00:02:38.949 ************************************ 00:02:38.949 END TEST make 00:02:38.949 ************************************ 00:02:38.949 02:57:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:38.949 02:57:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:38.949 02:57:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:38.949 02:57:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.949 02:57:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:38.949 02:57:09 -- pm/common@44 -- $ pid=3868314 00:02:38.949 02:57:09 -- pm/common@50 -- $ kill -TERM 3868314 00:02:38.949 02:57:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.949 02:57:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:38.949 02:57:09 -- pm/common@44 -- $ pid=3868316 00:02:38.949 02:57:09 -- pm/common@50 -- $ kill -TERM 3868316 00:02:38.949 02:57:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.949 02:57:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:38.949 02:57:09 -- pm/common@44 -- $ pid=3868317 00:02:38.949 02:57:09 -- pm/common@50 -- $ kill -TERM 3868317 00:02:38.949 02:57:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.949 02:57:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:38.949 02:57:09 -- pm/common@44 -- $ pid=3868349 00:02:38.949 02:57:09 -- pm/common@50 -- $ sudo -E kill -TERM 3868349 00:02:38.949 02:57:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:02:38.949 02:57:09 -- nvmf/common.sh@7 -- # uname -s 00:02:38.949 02:57:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:38.949 02:57:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:38.949 02:57:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:38.949 02:57:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:38.949 02:57:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:38.949 02:57:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:38.949 02:57:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
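(The stop_monitor_resources block above follows a standard pid-file cleanup pattern: each background monitor records its pid in a collect-*.pid file under the power/ output directory, and teardown sends SIGTERM to whatever pid each file names. A minimal sketch of that pattern, with an illustrative directory; only the collect-*.pid naming is taken from the log:

  # Sketch: terminate background resource monitors recorded in *.pid files.
  power_dir=output/power   # illustrative; the log uses .../spdk/../output/power
  for pidfile in "$power_dir"/collect-*.pid; do
      [[ -e $pidfile ]] || continue
      pid=$(<"$pidfile")
      # TERM first so the monitor can flush its log; ignore already-exited pids.
      kill -TERM "$pid" 2>/dev/null || true
  done
)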
00:02:38.949 02:57:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:38.949 02:57:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:38.949 02:57:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:38.949 02:57:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:02:38.949 02:57:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:02:38.949 02:57:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:38.949 02:57:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:38.949 02:57:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:38.949 02:57:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:38.949 02:57:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:02:38.949 02:57:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:38.949 02:57:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:38.949 02:57:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:38.949 02:57:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.949 02:57:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.949 02:57:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.949 02:57:10 -- paths/export.sh@5 -- # export PATH 00:02:38.949 02:57:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.949 02:57:10 -- nvmf/common.sh@47 -- # : 0 00:02:38.949 02:57:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:38.949 02:57:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:38.949 02:57:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:38.949 02:57:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:38.949 02:57:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:38.949 02:57:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:38.949 02:57:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:38.949 02:57:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:38.949 02:57:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:38.949 02:57:10 -- spdk/autotest.sh@32 -- # uname -s 00:02:38.949 02:57:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:38.949 02:57:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:38.949 02:57:10 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:38.949 02:57:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:38.949 02:57:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:38.949 02:57:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:38.949 02:57:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:38.949 02:57:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:38.949 02:57:10 -- spdk/autotest.sh@48 -- # udevadm_pid=3937169 00:02:38.949 02:57:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:38.949 02:57:10 -- pm/common@17 -- # local monitor 00:02:38.949 02:57:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:38.949 02:57:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.949 02:57:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.949 02:57:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.949 02:57:10 -- pm/common@21 -- # date +%s 00:02:38.949 02:57:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.949 02:57:10 -- pm/common@25 -- # sleep 1 00:02:38.949 02:57:10 -- pm/common@21 -- # date +%s 00:02:38.949 02:57:10 -- pm/common@21 -- # date +%s 00:02:38.949 02:57:10 -- pm/common@21 -- # date +%s 00:02:38.949 02:57:10 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715734630 00:02:38.949 02:57:10 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715734630 00:02:38.949 02:57:10 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715734630 00:02:38.949 02:57:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715734630 00:02:38.949 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715734630_collect-vmstat.pm.log 00:02:38.949 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715734630_collect-cpu-load.pm.log 00:02:38.949 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715734630_collect-cpu-temp.pm.log 00:02:39.208 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715734630_collect-bmc-pm.bmc.pm.log 00:02:40.146 02:57:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:40.146 02:57:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:40.146 02:57:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:40.146 02:57:11 -- common/autotest_common.sh@10 -- # set +x 00:02:40.146 02:57:11 -- spdk/autotest.sh@59 -- # create_test_list 00:02:40.146 02:57:11 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:40.146 02:57:11 -- common/autotest_common.sh@10 -- # set +x 00:02:40.146 02:57:11 -- spdk/autotest.sh@61 -- # dirname 
/var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:02:40.146 02:57:11 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:40.146 02:57:11 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:40.146 02:57:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:02:40.146 02:57:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:40.146 02:57:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:40.146 02:57:11 -- common/autotest_common.sh@1451 -- # uname 00:02:40.146 02:57:11 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:40.146 02:57:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:40.146 02:57:11 -- common/autotest_common.sh@1471 -- # uname 00:02:40.146 02:57:11 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:40.146 02:57:11 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:40.146 02:57:11 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:40.146 02:57:11 -- spdk/autotest.sh@72 -- # hash lcov 00:02:40.146 02:57:11 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:40.146 02:57:11 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:40.146 --rc lcov_branch_coverage=1 00:02:40.146 --rc lcov_function_coverage=1 00:02:40.146 --rc genhtml_branch_coverage=1 00:02:40.146 --rc genhtml_function_coverage=1 00:02:40.146 --rc genhtml_legend=1 00:02:40.146 --rc geninfo_all_blocks=1 00:02:40.146 ' 00:02:40.146 02:57:11 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:40.146 --rc lcov_branch_coverage=1 00:02:40.146 --rc lcov_function_coverage=1 00:02:40.146 --rc genhtml_branch_coverage=1 00:02:40.146 --rc genhtml_function_coverage=1 00:02:40.146 --rc genhtml_legend=1 00:02:40.146 --rc geninfo_all_blocks=1 00:02:40.146 ' 00:02:40.146 02:57:11 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:40.146 --rc lcov_branch_coverage=1 00:02:40.146 --rc lcov_function_coverage=1 00:02:40.146 --rc genhtml_branch_coverage=1 00:02:40.146 --rc genhtml_function_coverage=1 00:02:40.146 --rc genhtml_legend=1 00:02:40.146 --rc geninfo_all_blocks=1 00:02:40.146 --no-external' 00:02:40.146 02:57:11 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:40.146 --rc lcov_branch_coverage=1 00:02:40.146 --rc lcov_function_coverage=1 00:02:40.146 --rc genhtml_branch_coverage=1 00:02:40.146 --rc genhtml_function_coverage=1 00:02:40.146 --rc genhtml_legend=1 00:02:40.146 --rc geninfo_all_blocks=1 00:02:40.146 --no-external' 00:02:40.146 02:57:11 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:40.146 lcov: LCOV version 1.14 00:02:40.146 02:57:11 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:02:55.026 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:55.026 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:55.285 
/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:55.285 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:55.285 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:55.285 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:55.285 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:55.285 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:13.392 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:13.392 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:13.393 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:13.393 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:15.296 02:57:46 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:15.296 02:57:46 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:15.296 02:57:46 -- common/autotest_common.sh@10 -- # set +x 00:03:15.296 02:57:46 -- spdk/autotest.sh@91 -- # rm -f 00:03:15.296 02:57:46 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.582 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:03:18.582 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:18.582 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:18.582 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:18.841 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:18.841 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:18.841 02:57:49 -- 
spdk/autotest.sh@96 -- # get_zoned_devs 00:03:18.841 02:57:49 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:18.841 02:57:49 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:18.841 02:57:49 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:18.841 02:57:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:18.841 02:57:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:18.841 02:57:49 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:18.841 02:57:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:18.841 02:57:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:18.841 02:57:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:18.841 02:57:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:03:18.841 02:57:49 -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:03:18.841 02:57:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:18.841 02:57:49 -- common/autotest_common.sh@1661 -- # [[ host-managed != none ]] 00:03:18.841 02:57:49 -- common/autotest_common.sh@1670 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:03:18.841 02:57:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:18.841 02:57:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:03:18.841 02:57:49 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:03:18.841 02:57:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:18.841 02:57:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:18.841 02:57:49 -- spdk/autotest.sh@98 -- # (( 1 > 0 )) 00:03:18.842 02:57:49 -- spdk/autotest.sh@103 -- # export PCI_BLOCKED=0000:5f:00.0 00:03:18.842 02:57:49 -- spdk/autotest.sh@103 -- # PCI_BLOCKED=0000:5f:00.0 00:03:18.842 02:57:49 -- spdk/autotest.sh@104 -- # export PCI_ZONED=0000:5f:00.0 00:03:18.842 02:57:49 -- spdk/autotest.sh@104 -- # PCI_ZONED=0000:5f:00.0 00:03:18.842 02:57:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:18.842 02:57:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:18.842 02:57:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:18.842 02:57:49 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:18.842 02:57:49 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:18.842 No valid GPT data, bailing 00:03:18.842 02:57:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:18.842 02:57:49 -- scripts/common.sh@391 -- # pt= 00:03:18.842 02:57:49 -- scripts/common.sh@392 -- # return 1 00:03:18.842 02:57:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:18.842 1+0 records in 00:03:18.842 1+0 records out 00:03:18.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00247035 s, 424 MB/s 00:03:18.842 02:57:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:18.842 02:57:49 -- spdk/autotest.sh@112 -- # [[ -z 0000:5f:00.0 ]] 00:03:18.842 02:57:49 -- spdk/autotest.sh@112 -- # continue 00:03:18.842 02:57:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:18.842 02:57:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:18.842 02:57:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:18.842 02:57:49 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:18.842 02:57:49 -- scripts/common.sh@387 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:18.842 No valid GPT data, bailing 00:03:18.842 02:57:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:18.842 02:57:49 -- scripts/common.sh@391 -- # pt= 00:03:19.100 02:57:49 -- scripts/common.sh@392 -- # return 1 00:03:19.100 02:57:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:19.100 1+0 records in 00:03:19.100 1+0 records out 00:03:19.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00324014 s, 324 MB/s 00:03:19.100 02:57:50 -- spdk/autotest.sh@118 -- # sync 00:03:19.100 02:57:50 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:19.100 02:57:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:19.100 02:57:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:24.371 02:57:55 -- spdk/autotest.sh@124 -- # uname -s 00:03:24.371 02:57:55 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:24.371 02:57:55 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:24.371 02:57:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:24.371 02:57:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:24.371 02:57:55 -- common/autotest_common.sh@10 -- # set +x 00:03:24.371 ************************************ 00:03:24.371 START TEST setup.sh 00:03:24.371 ************************************ 00:03:24.371 02:57:55 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:24.372 * Looking for test storage... 00:03:24.372 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:24.372 02:57:55 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:24.372 02:57:55 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:24.372 02:57:55 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:24.372 02:57:55 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:24.372 02:57:55 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:24.372 02:57:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:24.372 ************************************ 00:03:24.372 START TEST acl 00:03:24.372 ************************************ 00:03:24.372 02:57:55 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:24.372 * Looking for test storage... 
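(The block_in_use checks above gate the zero-fill: a namespace is only wiped after spdk-gpt.py finds no valid GPT and blkid reports no partition-table type, which is why "No valid GPT data, bailing" is followed by a 1 MiB dd. A condensed sketch of that guard, with an illustrative device name; the real logic lives in scripts/common.sh and autotest.sh:

  # Sketch: wipe the first MiB of an NVMe namespace only if no partition table exists.
  dev=/dev/nvme0n1   # illustrative
  pt=$(blkid -s PTTYPE -o value "$dev" || true)
  if [[ -z $pt ]]; then
      # No GPT/MBR detected, so reinitialize the device for the test run.
      dd if=/dev/zero of="$dev" bs=1M count=1
  fi
)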
00:03:24.372 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
00:03:24.372 02:57:55 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:03:24.372 02:57:55 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:03:24.372 02:57:55 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:03:24.372 02:57:55 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf
[xtrace condensed: common/autotest_common.sh@1668-1669 runs is_block_zoned for nvme0n1, nvme0n2 and nvme1n1; /sys/block/<dev>/queue/zoned reads "none" for nvme0n1 and nvme1n1 and "host-managed" for nvme0n2, so only nvme0n2 is recorded as zoned]
00:03:24.372 02:57:55 setup.sh.acl -- common/autotest_common.sh@1670 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0
00:03:24.372 02:57:55 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:03:24.372 02:57:55 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:03:24.372 02:57:55 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:03:24.372 02:57:55 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:03:24.372 02:57:55 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:03:24.372 02:57:55 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:24.372 02:57:55 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:03:28.564 02:57:58 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:03:28.564 02:57:58 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:03:28.564 02:57:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:28.564 02:57:59 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:03:28.564 02:57:59 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:03:28.564 02:57:59 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:03:31.856 Hugepages
00:03:31.856 node hugesize free / total
[xtrace condensed: the four node/hugesize header lines (1048576kB and 2048kB on each of the two nodes) fail acl.sh@19's *:*:*.* BDF match and hit "continue"]
00:03:31.856 
00:03:31.856 Type BDF Vendor Device NUMA Driver Device Block devices
[xtrace condensed: acl.sh@19-20 skips 0000:00:04.0 through 0000:00:04.7 -- driver is ioatdma, not nvme, so each hits "continue"]
00:03:31.856 02:58:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]]
00:03:31.856 02:58:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:31.856 02:58:02 setup.sh.acl -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:03:31.856 02:58:02 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:31.856 02:58:02 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:31.856 02:58:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.856 02:58:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]]
00:03:31.856 02:58:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:31.856 02:58:02 setup.sh.acl -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]]
00:03:31.856 02:58:02 setup.sh.acl -- setup/acl.sh@21 -- # continue
[xtrace condensed: acl.sh@19-20 likewise skips 0000:80:04.0 through 0000:80:04.7 (ioatdma)]
00:03:31.857 02:58:02 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
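For reference, the get_zoned_devs scan traced above reduces to roughly the following standalone sketch. The zoned check itself (/sys/block/<dev>/queue/zoned != "none") is exactly what the trace shows; the PCI-address lookup through the sysfs device symlinks is an assumption added for illustration, not taken from this log:

    #!/usr/bin/env bash
    # Sketch of the zoned-device scan: a namespace counts as zoned when
    # /sys/block/<dev>/queue/zoned holds anything but "none"
    # (nvme0n2 reports "host-managed" in this run).
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        [[ -e $nvme/queue/zoned ]] || continue
        [[ $(<"$nvme/queue/zoned") != none ]] || continue
        # Assumption: resolve the namespace back to its controller's PCI
        # BDF via the sysfs symlink chain (0000:5f:00.0 in this log).
        bdf=$(basename "$(readlink -f "$nvme/device/device")")
        zoned_devs[$dev]=$bdf
    done
    for dev in "${!zoned_devs[@]}"; do
        echo "$dev -> ${zoned_devs[$dev]}"
    done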
00:03:31.857 02:58:02 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:03:31.857 02:58:02 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:31.857 02:58:02 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:31.857 02:58:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:31.857 ************************************
00:03:31.857 START TEST denied
00:03:31.857 ************************************
00:03:31.857 02:58:02 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied
00:03:31.857 02:58:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED='0000:5f:00.0 0000:5e:00.0'
00:03:31.857 02:58:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:03:31.857 02:58:02 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0'
00:03:31.857 02:58:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:03:31.857 02:58:02 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:03:35.190 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0
00:03:35.190 02:58:06 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0
00:03:35.190 02:58:06 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:03:35.190 02:58:06 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:03:35.190 02:58:06 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]]
00:03:35.190 02:58:06 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver
00:03:35.190 02:58:06 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:35.190 02:58:06 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:35.190 02:58:06 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:03:35.190 02:58:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:35.190 02:58:06 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:03:40.465 
00:03:40.465 real 0m8.171s
00:03:40.465 user 0m2.747s
00:03:40.465 sys 0m4.632s
00:03:40.465 02:58:10 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:40.465 02:58:10 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:40.465 ************************************
00:03:40.465 END TEST denied
00:03:40.465 ************************************
00:03:40.465 02:58:10 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:40.465 02:58:10 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:40.465 02:58:10 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:40.465 02:58:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:40.465 ************************************
00:03:40.465 START TEST allowed
00:03:40.465 ************************************
00:03:40.465 02:58:10 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed
00:03:40.465 02:58:10 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:03:40.465 02:58:10 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:40.465 02:58:10 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*'
00:03:40.465 02:58:10 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.465 02:58:10 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:03:44.659 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:44.659 02:58:15 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:44.659 02:58:15 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:44.659 02:58:15 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:44.659 02:58:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:44.659 02:58:15 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:03:47.954 
00:03:47.954 real 0m7.880s
00:03:47.954 user 0m2.585s
00:03:47.954 sys 0m4.436s
00:03:47.954 02:58:18 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:47.954 02:58:18 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:47.954 ************************************
00:03:47.954 END TEST allowed
00:03:47.954 ************************************
00:03:47.954 
00:03:47.954 real 0m23.463s
00:03:47.954 user 0m8.113s
00:03:47.954 sys 0m13.839s
00:03:47.954 02:58:18 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:47.954 02:58:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:47.954 ************************************
00:03:47.954 END TEST acl
00:03:47.954 ************************************
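The denied/allowed pair above exercises scripts/setup.sh purely through its PCI allow/block lists; stripped of the test harness, the two checks come down to something like this (BDFs and grep patterns taken verbatim from this run; run as root from the SPDK repo root):

    # "denied": a controller on PCI_BLOCKED must be skipped by setup.sh.
    PCI_BLOCKED='0000:5f:00.0 0000:5e:00.0' scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:5e:00.0'

    # "allowed": with only 0000:5e:00.0 allowed, that controller must be
    # rebound from the kernel nvme driver to vfio-pci.
    PCI_ALLOWED=0000:5e:00.0 scripts/setup.sh config \
        | grep -E '0000:5e:00.0 .*: nvme -> .*'

    scripts/setup.sh reset   # hand everything back to the kernel drivers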
00:03:47.954 02:58:18 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh
00:03:47.954 02:58:18 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:47.954 02:58:18 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:47.954 02:58:18 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:47.954 ************************************
00:03:47.954 START TEST hugepages
00:03:47.954 ************************************
00:03:47.954 02:58:18 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh
00:03:47.954 * Looking for test storage...
00:03:47.954 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:47.954 02:58:18 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 69083268 kB' 'MemAvailable: 73741452 kB' 'Buffers: 3724 kB' 'Cached: 16296212 kB' 'SwapCached: 0 kB' 'Active: 12312996 kB' 'Inactive: 4487836 kB' 'Active(anon): 11689272 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504292 kB' 'Mapped: 192748 kB' 'Shmem: 11188376 kB' 'KReclaimable: 285700 kB' 'Slab: 789052 kB' 'SReclaimable: 285700 kB' 'SUnreclaim: 503352 kB' 'KernelStack: 19648 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52952936 kB' 'Committed_AS: 13064252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221052 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
[xtrace condensed: setup/common.sh@31-32 walks this snapshot field by field; every field from MemTotal through HugePages_Surp fails the match against Hugepagesize and hits "continue"]
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:47.956 02:58:18 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
[xtrace condensed: hugepages.sh@39-41 loops over both nodes and, for each of the two per-node hugepage pools, writes "echo 0" to its nr_hugepages]
00:03:47.956 02:58:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:47.956 02:58:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
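clear_hp above zeroes every per-node hugepage pool through sysfs, and the default_setup test that follows requests 1024 pages on node 0. A minimal sketch of that sysfs dance, assuming the paths seen in this trace (writes need root; node and page-size names differ per machine, and SPDK's actual allocation goes through scripts/setup.sh):

    # Zero every per-node hugepage pool, as clear_hp does:
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done

    # Then request 1024 x 2 MiB pages on node 0 only:
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

    # The resulting pool state lands in /proc/meminfo:
    grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect 1024 / 1024 here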
00:03:47.956 02:58:19 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:47.956 02:58:19 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:47.956 02:58:19 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:47.956 02:58:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:47.956 ************************************
00:03:47.956 START TEST default_setup
00:03:47.956 ************************************
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:47.956 02:58:19 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:03:51.242 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
[setup.sh rebind output condensed: 0000:00:04.0 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7 (8086 2021) each move ioatdma -> vfio-pci]
00:03:52.181 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:52.181 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:52.181 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:52.181 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:52.181 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:52.181 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:52.181 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:52.181 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:52.181 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:52.181 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace condensed: the setup/common.sh@17-31 getter prologue repeats as in the Hugepagesize lookup above, now with get=AnonHugePages]
00:03:52.181 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71248568 kB' 'MemAvailable: 75906544 kB' 'Buffers: 3724 kB' 'Cached: 16296324 kB' 'SwapCached: 0 kB' 'Active: 12332300 kB' 'Inactive: 4487836 kB' 'Active(anon): 11708576 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523612 kB' 'Mapped: 193024 kB' 'Shmem: 11188488 kB' 'KReclaimable: 285284 kB' 'Slab: 786668 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501384 kB' 'KernelStack: 19696 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13085020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221084 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
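Every get_meminfo call in this log (Hugepagesize earlier, AnonHugePages here, HugePages_Surp below) follows the same pattern the xtrace shows: slurp /proc/meminfo (or a node's meminfo file, whose lines carry a "Node <n> " prefix), split each line on ': ', and print the value of the first field matching the request. A minimal re-implementation of that pattern for bash 4+ (not SPDK's exact helper, which lives in test/setup/common.sh and is only partially visible in this trace):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix pattern below

    # get_meminfo FIELD [NODE] -- echo the field's value, as traced above.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem line
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip per-node line prefixes
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo Hugepagesize   # -> 2048 on this machine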
[xtrace condensed: setup/common.sh@31-32 walks the snapshot field by field; everything from MemTotal through HardwareCorrupted fails the match against AnonHugePages and hits "continue"]
00:03:52.182 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:52.182 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:52.182 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:52.182 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
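verify_nr_hugepages then cross-checks the allocation: anon (AnonHugePages) came back 0, and the getter is invoked again for HugePages_Surp and related counters below. Reusing the get_meminfo sketch from above, the shape of that bookkeeping is roughly as follows; the exact comparisons live in test/setup/hugepages.sh and are not fully shown in this excerpt, so treat the final check as an assumption:

    anon=$(get_meminfo AnonHugePages)    # 0 in this run: no THP interference
    surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the pool: 0
    resv=$(get_meminfo HugePages_Rsvd)   # reserved-but-unfaulted pages: 0
    total=$(get_meminfo HugePages_Total) # 1024, as requested by default_setup
    free=$(get_meminfo HugePages_Free)   # 1024: nothing has mapped them yet
    # Assumed sanity check: with nothing mapped, the whole pool should be
    # free and carry no surplus pages.
    (( surp == 0 && free == total )) || echo "unexpected hugepage state" >&2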
'' ]] 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71247460 kB' 'MemAvailable: 75905436 kB' 'Buffers: 3724 kB' 'Cached: 16296328 kB' 'SwapCached: 0 kB' 'Active: 12331744 kB' 'Inactive: 4487836 kB' 'Active(anon): 11708020 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523136 kB' 'Mapped: 192852 kB' 'Shmem: 11188492 kB' 'KReclaimable: 285284 kB' 'Slab: 786656 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501372 kB' 'KernelStack: 19680 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13085036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221052 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB' 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.183 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
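The trace above (setup/common.sh@16-33) is the body of the get_meminfo helper used throughout this test. A minimal bash reconstruction, assuming only what the traced commands show -- the variable handling and the per-node fallback are inferred from the trace, so the real setup/common.sh may differ in detail:

    shopt -s extglob   # needed for the "Node +([0-9]) " strip seen at common.sh@29

    # get_meminfo KEY [NODE]: print the value column for KEY, read from
    # /proc/meminfo or, when NODE is given, from that node's sysfs meminfo.
    get_meminfo() {
        local get=$1
        local node=${2:-}   # empty in this run, hence the ".../node/node/meminfo" probe
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix of per-node files
        while IFS=': ' read -r var val _; do
            # the escaped pattern in the trace ("\H\u\g\e...") is just a literal
            # comparison; quoting $get achieves the same thing
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

For comparison, the same single-key lookup can be done with one awk call; this is an alternative one-liner, not what the traced script uses: awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo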
[trace condensed: setup/common.sh@31-32 scans the snapshot for HugePages_Surp, skipping MemTotal through HugePages_Rsvd via continue, then matches]
00:03:52.184 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.184 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:52.184 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:52.184 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:52.184 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[trace condensed: the get_meminfo prologue repeats (setup/common.sh@17-31) with get=HugePages_Rsvd, node unset, mem_f=/proc/meminfo]
00:03:52.185 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71246704 kB' 'MemAvailable: 75904680 kB' 'Buffers: 3724 kB' 'Cached: 16296332 kB' 'SwapCached: 0 kB' 'Active: 12331432 kB' 'Inactive: 4487836 kB' 'Active(anon): 11707708 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522812 kB' 'Mapped: 192852 kB' 'Shmem: 11188496 kB' 'KReclaimable: 285284 kB' 'Slab: 786656 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501372 kB' 'KernelStack: 19680 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13085060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221052 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
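One detail worth flagging in the prologue just condensed: the existence test at setup/common.sh@23 probed /sys/devices/system/node/node/meminfo with no node number, because this call passed no node argument (local node= is empty), so the helper fell back to /proc/meminfo. Under the sketch given earlier, a hypothetical per-node call would select the sysfs file instead:

    get_meminfo HugePages_Rsvd      # global /proc/meminfo -- the case traced here
    get_meminfo HugePages_Rsvd 0    # would read /sys/devices/system/node/node0/meminfo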
[trace condensed: setup/common.sh@31-32 scans the snapshot for HugePages_Rsvd, skipping MemTotal through HugePages_Free via continue, then matches]
00:03:52.447 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.447 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:52.447 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:52.447 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:52.447 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:52.447 nr_hugepages=1024
00:03:52.447 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:52.447 resv_hugepages=0
00:03:52.447 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:52.447 surplus_hugepages=0
00:03:52.447 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:52.447 anon_hugepages=0
00:03:52.447 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.448 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:52.448 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[trace condensed: the get_meminfo prologue repeats (setup/common.sh@17-31) with get=HugePages_Total, node unset, mem_f=/proc/meminfo]
00:03:52.448 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71247676 kB' 'MemAvailable: 75905652 kB' 'Buffers: 3724 kB' 'Cached: 16296368 kB' 'SwapCached: 0 kB' 'Active: 12331804 kB' 'Inactive: 4487836 kB' 'Active(anon): 11708080 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523140 kB' 'Mapped: 192852 kB' 'Shmem: 11188532 kB' 'KReclaimable: 285284 kB' 'Slab: 786656 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501372 kB' 'KernelStack: 19680 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13085080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221052 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
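The caller in setup/hugepages.sh (visible at @97-@110 of this trace) gathers anon, surp and resv via get_meminfo and then asserts the kernel's hugepage accounting. A minimal sketch of that check, using the variable names and guard expressions shown in the trace (the final Hugetlb cross-check is an added illustration, not part of the traced script):

    nr_hugepages=1024 anon=0 surp=0 resv=0
    # hugepages.sh@107: total pages must equal configured + surplus + reserved
    (( 1024 == nr_hugepages + surp + resv )) || exit 1
    # hugepages.sh@109: with surp=0 and resv=0, total must equal nr_hugepages alone
    (( 1024 == nr_hugepages )) || exit 1
    # illustration: the snapshot is self-consistent -- 1024 pages * 2048 kB each
    # gives exactly the reported 'Hugetlb: 2097152 kB'
    (( 1024 * 2048 == 2097152 )) || exit 1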
[trace condensed: setup/common.sh@31-32 scans the snapshot for HugePages_Total, skipping MemTotal through AnonHugePages via continue; the scan continues below]
00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9])
00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.449 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 20202376 kB' 'MemUsed: 12432252 kB' 'SwapCached: 0 kB' 'Active: 7818256 kB' 'Inactive: 1118852 kB' 'Active(anon): 7511844 kB' 'Inactive(anon): 0 kB' 'Active(file): 306412 kB' 'Inactive(file): 1118852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8630848 kB' 'Mapped: 133100 kB' 'AnonPages: 309556 kB' 'Shmem: 7205584 kB' 'KernelStack: 11400 kB' 'PageTables: 5616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163692 kB' 'Slab: 439996 kB' 'SReclaimable: 163692 kB' 'SUnreclaim: 276304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
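The printf a few entries above is the complete node0 snapshot (/sys/devices/system/node/node0/meminfo) that this get_meminfo HugePages_Surp 0 call walks through. Its numbers are internally consistent: MemUsed 12432252 kB is exactly MemTotal 32634628 kB minus MemFree 20202376 kB, and HugePages_Total: 1024 with HugePages_Free: 1024 shows all 1024 default-size pages sitting on node0, matching the nodes_sys values (1024 and 0) recorded by get_nodes just before. A hypothetical one-liner (not part of the harness) to pull the same fields by hand:

    awk '/^Node 0 (MemTotal|MemFree|MemUsed|HugePages_)/' /sys/devices/system/node/node0/meminfo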
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.450 02:58:23
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.450 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.451 02:58:23 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:52.451 node0=1024 expecting 1024
00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:52.451
00:03:52.451 real 0m4.368s
00:03:52.451 user 0m1.369s
00:03:52.451 sys 0m2.087s
00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:52.451 02:58:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:52.451 ************************************
00:03:52.451 END TEST default_setup
00:03:52.451 ************************************
00:03:52.451 02:58:23 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:52.451 02:58:23 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:52.451 02:58:23 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:52.451 02:58:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:52.451 ************************************
00:03:52.451 START TEST per_node_1G_alloc
00:03:52.451 ************************************
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
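get_test_nr_hugepages is entered with a size of 1048576 kB (1 GiB) plus node IDs 0 and 1, and with the platform's 2048 kB default hugepage size that request reduces to the nr_hugepages=512 traced at hugepages.sh@57; the per-node assignments follow just below. A sketch of that arithmetic, assumed from the traced values rather than lifted from hugepages.sh:

    default_hugepages=2048                        # kB, the Hugepagesize reported in /proc/meminfo
    size=1048576                                  # kB requested per node, i.e. 1 GiB
    (( size >= default_hugepages )) || exit 1     # the check traced at hugepages.sh@55
    nr_hugepages=$(( size / default_hugepages ))  # 1048576 / 2048 = 512 pages per node

    user_nodes=(0 1)
    declare -A nodes_test
    for node in "${user_nodes[@]}"; do
        nodes_test[$node]=$nr_hugepages           # node0=512, node1=512 -> 1024 pages overall
    done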
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:52.451 02:58:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:03:55.741 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:55.741 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:55.741 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:55.741 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
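With NRHUGE=512 and HUGENODE=0,1 set, scripts/setup.sh reconfigures the machine: the lines above show the one denied controller at 0000:5f:00.0 being skipped while the remaining devices are already bound to vfio-pci. The per-node hugepage reservation it performs ultimately goes through the kernel's standard sysfs knob; a simplified sketch of that step (setup.sh itself handles many more cases):

    NRHUGE=512
    for node in 0 1; do    # HUGENODE=0,1
        echo "$NRHUGE" | sudo tee \
            "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
    done

This is why verify_nr_hugepages, entered immediately below, expects nr_hugepages=1024 in total: 512 on each of the two nodes.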
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71225848 kB' 'MemAvailable: 75883824 kB' 'Buffers: 3724 kB' 'Cached: 16296468 kB' 'SwapCached: 0 kB' 'Active: 12331300 kB' 'Inactive: 4487836 kB' 'Active(anon): 11707576 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521720 kB' 'Mapped: 191784 kB' 'Shmem: 11188632 kB' 'KReclaimable: 285284 kB' 'Slab: 787036 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501752 kB' 'KernelStack: 19648 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13072120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221292 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 
02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.742 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.743 
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.743 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71227092 kB' 'MemAvailable: 75885068 kB' 'Buffers: 3724 kB' 'Cached: 16296468 kB' 'SwapCached: 0 kB' 'Active: 12330452 kB' 'Inactive: 4487836 kB' 'Active(anon): 11706728 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521420 kB' 'Mapped: 191648 kB' 'Shmem: 11188632 kB' 'KReclaimable: 285284 kB' 'Slab: 786996 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501712 kB' 'KernelStack: 19616 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13072136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221260 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
[... xtrace of the per-field scan: every /proc/meminfo key from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with continue ...]
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
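get_meminfo also supports an optional node argument; with node= empty, the [[ -e /sys/devices/system/node/node/meminfo ]] test above degenerates to a non-existent path and the code falls back to /proc/meminfo. Per-node meminfo lines carry a 'Node <n> ' prefix, which the mem=("${mem[@]#Node +([0-9]) }") expansion strips using an extglob pattern. A hedged sketch of that branch (the node value is an example; this run used node=):

# Sketch of the per-node selection implied by the trace; requires extglob
# for the +([0-9]) pattern used in the prefix-strip expansion.
shopt -s extglob
node=0                                        # example node, not from this run
mem_f=/sys/devices/system/node/node$node/meminfo
[[ -e $mem_f ]] || mem_f=/proc/meminfo        # fall back, as the trace does
mapfile -t mem <"$mem_f"
mem=("${mem[@]#Node +([0-9]) }")              # drop "Node 0 " prefixes, if any
printf '%s\n' "${mem[@]}" | grep '^HugePages_Total'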
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.745 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71227064 kB' 'MemAvailable: 75885040 kB' 'Buffers: 3724 kB' 'Cached: 16296488 kB' 'SwapCached: 0 kB' 'Active: 12330800 kB' 'Inactive: 4487836 kB' 'Active(anon): 11707076 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521760 kB' 'Mapped: 191648 kB' 'Shmem: 11188652 kB' 'KReclaimable: 285284 kB' 'Slab: 786996 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501712 kB' 'KernelStack: 19616 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13072160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221260 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
[... xtrace of the per-field scan: every /proc/meminfo key from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with continue ...]
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:56.010 nr_hugepages=1024
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:56.010 resv_hugepages=0
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:56.010 surplus_hugepages=0
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:56.010 anon_hugepages=0
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
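The four values just echoed feed the consistency check on hugepages.sh@107: the pages actually allocated must equal the requested count plus surplus plus reserved, here 1024 == 1024 + 0 + 0. The snapshot's Hugetlb figure is likewise HugePages_Total times Hugepagesize. The same arithmetic as a standalone bash check, with values copied from this trace:

# Values from the trace above: 1024 hugepages of 2048 kB, none surplus,
# reserved, or anonymous.
nr_hugepages=1024 surp=0 resv=0 anon=0
(( 1024 == nr_hugepages + surp + resv )) && echo accounting-ok
echo "Hugetlb = $(( 1024 * 2048 )) kB"   # 2097152 kB, as in the snapshot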
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.010 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71228312 kB' 'MemAvailable: 75886288 kB' 'Buffers: 3724 kB' 'Cached: 16296512 kB' 'SwapCached: 0 kB' 'Active: 12331580 kB' 'Inactive: 4487836 kB' 'Active(anon): 11707856 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522540 kB' 'Mapped: 191648 kB' 'Shmem: 11188676 kB' 'KReclaimable: 285284 kB' 'Slab: 786996 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501712 kB' 'KernelStack: 19648 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13085132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221292 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
[... xtrace of the per-field scan: keys from MemTotal through ShmemHugePages are compared against HugePages_Total and skipped with continue ...]
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
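The echo 1024 / return 0 pair above is how get_meminfo in setup/common.sh hands a single meminfo value back to its caller: it walks the file field by field until the requested key matches. A minimal sketch of the same lookup, assuming only the standard /proc/meminfo and per-node meminfo formats; the function name and the awk one-liner here are illustrative shorthand, not the script's actual read/IFS loop.

    # Sketch: fetch one meminfo field, system-wide or for one NUMA node.
    # Per-node files prefix every line with "Node <n> ", which is stripped first.
    get_meminfo_sketch() {
        local field=$1 node=$2 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        awk -v f="$field:" '{ sub(/^Node [0-9]+ /, "") } $1 == f { print $2; exit }' "$mem_f"
    }
    # e.g. get_meminfo_sketch HugePages_Total    -> 1024 on this box
    #      get_meminfo_sketch HugePages_Surp 0   -> surplus 2M pages on node 0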
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.012 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 21232436 kB' 'MemUsed: 11402192 kB' 'SwapCached: 0 kB' 'Active: 7819696 kB' 'Inactive: 1118852 kB' 'Active(anon): 7513284 kB' 'Inactive(anon): 0 kB' 'Active(file): 306412 kB' 'Inactive(file): 1118852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8630876 kB' 'Mapped: 131896 kB' 'AnonPages: 310932 kB' 'Shmem: 7205612 kB' 'KernelStack: 11368 kB' 'PageTables: 5576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163692 kB' 'Slab: 440384 kB' 'SReclaimable: 163692 kB' 'SUnreclaim: 276692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... same xtrace cycle repeats over the node0 snapshot fields, MemTotal through HugePages_Free, none matching HugePages_Surp ...]
00:03:56.013 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.013 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.013 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:56.013 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:56.013 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:56.013 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:56.013 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:56.013 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:56.013 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:56.013 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:56.013 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.013 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.014 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:56.014 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:56.014 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.014 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.014 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.014 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.014 02:58:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688340 kB' 'MemFree: 49995936 kB' 'MemUsed: 10692404 kB' 'SwapCached: 0 kB' 'Active: 4510376 kB' 'Inactive: 3368984 kB' 'Active(anon): 4193064 kB' 'Inactive(anon): 0 kB' 'Active(file): 317312 kB' 'Inactive(file): 3368984 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7669384 kB' 'Mapped: 59752 kB' 'AnonPages: 210068 kB' 'Shmem: 3983088 kB' 'KernelStack: 8232 kB' 'PageTables: 2992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121592 kB' 'Slab: 346612 kB' 'SReclaimable: 121592 kB' 'SUnreclaim: 225020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... same xtrace cycle repeats over the node1 snapshot fields, MemTotal through HugePages_Free, none matching HugePages_Surp ...]
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:56.015 node0=512 expecting 512
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:56.015 node1=512 expecting 512
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:56.015
00:03:56.015 real 0m3.530s
00:03:56.015 user 0m1.402s
00:03:56.015 sys 0m2.188s
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:56.015 02:58:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:56.015 ************************************
00:03:56.015 END TEST per_node_1G_alloc
00:03:56.015 ************************************
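Both nodes report 512 of the 1024 allocated 2 MiB pages, so the per-node check passes. The same accounting can be spot-checked directly from the kernel's sysfs, independent of the test harness; the paths below are standard, but the 512/512 expectation is specific to this 2-node box with 2048 kB default hugepages.

    # Read per-node 2M hugepage counters straight from sysfs.
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        total=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
        free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
        echo "node$n: HugePages_Total=$total HugePages_Free=$free"
    done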
00:03:56.015 02:58:27 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:56.015 02:58:27 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:56.015 02:58:27 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:56.015 02:58:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:56.015 ************************************
00:03:56.015 START TEST even_2G_alloc
00:03:56.015 ************************************
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:56.015 02:58:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
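Here setup.sh runs with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, i.e. reserve 1024 default-size (2048 kB) pages split evenly across the NUMA nodes. Stripped of setup.sh's cleanup and capacity checks, the even split reduces to writing the standard per-node sysfs counters; a rough sketch (requires root), not the script's actual code path.

    # Evenly distribute $NRHUGE 2M hugepages across all online NUMA nodes.
    NRHUGE=${NRHUGE:-1024}
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$((NRHUGE / ${#nodes[@]}))        # 1024 / 2 = 512 on this box
    for node in "${nodes[@]}"; do
        echo "$per_node" > "$node/hugepages/hugepages-2048kB/nr_hugepages"
    done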
00:03:59.370 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:59.370 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:59.370 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:59.370 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:59.370 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:59.370 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:59.370 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:59.370 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:59.370 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:59.370 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:59.370 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:59.371 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:59.371 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:59.371 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:59.371 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:59.371 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:59.371 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:59.371 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
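The lines above are setup.sh reporting PCI device state: 0000:5f:00.0 is skipped as denied (presumably filtered by the job's PCI block list), and the rest are already bound to vfio-pci from an earlier run. A hedged way to inspect the same bindings by hand via the standard sysfs driver symlink; the device list below is just a sample from this box.

    # Show which kernel driver each listed PCI function is currently bound to.
    for dev in 0000:5e:00.0 0000:5f:00.0 0000:00:04.0 0000:80:04.0; do
        drv=$(readlink /sys/bus/pci/devices/$dev/driver 2>/dev/null) || drv=unbound
        echo "$dev -> ${drv##*/}"
    done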
00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71249288 kB' 'MemAvailable: 75907264 kB' 'Buffers: 3724 kB' 'Cached: 16296644 kB' 'SwapCached: 0 kB' 'Active: 12332780 kB' 'Inactive: 4487836 kB' 'Active(anon): 11709056 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522872 kB' 'Mapped: 190824 kB' 'Shmem: 11188808 kB' 'KReclaimable: 285284 kB' 'Slab: 786448 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501164 kB' 'KernelStack: 19760 kB' 'PageTables: 9612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13041320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221468 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... same xtrace cycle repeats over the system-wide snapshot fields, MemTotal onward, scanning for AnonHugePages ...]
-- # read -r var val _ 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.371 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.372 02:58:30 
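The helper being traced here is easy to misread in xtrace form, so here is a compact reconstruction of what setup/common.sh's get_meminfo is doing, read directly off the @16-@33 lines above. This is a sketch, not the verbatim SPDK source: the per-node branch and the extglob requirement are assumptions inferred from the [[ -e /sys/devices/system/node/node/meminfo ]] probe and the +([0-9]) pattern, and only the empty-node /proc/meminfo path is exercised in this run.

    #!/usr/bin/env bash
    # Sketch of get_meminfo as reconstructed from the xtrace above
    # (assumed shape, not copied from setup/common.sh).
    shopt -s extglob  # the +([0-9]) pattern at common.sh@29 needs extglob

    get_meminfo() {
        local get=$1 node=${2:-}   # e.g. get=AnonHugePages; node is empty here
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # common.sh@23-@25: assumed fallback to the per-NUMA-node counters
        # when a node number is supplied.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip it so both
        # sources share the "Key: value" shape (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")

        # common.sh@31-@33: split each "Key: value [kB]" line; print the value
        # of the requested key and skip every other key with continue.
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on this host, matching anon=0 above

That is why the log shows one IFS=': ' / read / [[ ... ]] triple per meminfo key: the function scans the snapshot linearly until the requested key matches, then echoes its value and returns.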
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71249792 kB' 'MemAvailable: 75907768 kB' 'Buffers: 3724 kB' 'Cached: 16296648 kB' 'SwapCached: 0 kB' 'Active: 12331260 kB' 'Inactive: 4487836 kB' 'Active(anon): 11707536 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521528 kB' 'Mapped: 190820 kB' 'Shmem: 11188812 kB' 'KReclaimable: 285284 kB' 'Slab: 786272 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 500988 kB' 'KernelStack: 19840 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13041336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221308 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.372 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace scan elided: setup/common.sh@31-@32 cycled IFS=': ' / read -r var val _ / [[ ... ]] / continue over the 51 meminfo keys ahead of HugePages_Surp, skipping every one]
00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
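With anon and surp now known, one more scan below fetches HugePages_Rsvd, after which verify_nr_hugepages checks the totals (the hugepages.sh@102-@110 lines traced further down). A minimal sketch of that bookkeeping, assuming get_meminfo behaves as reconstructed above and plugging in the values this run reports:

    # Sketch of the verification traced at setup/hugepages.sh@107-@110 below
    # (assumed shape; names mirror the trace, not the SPDK source).
    nr_hugepages=1024          # requested pool: 1024 x 2048 kB = 2G even alloc
    anon=0 surp=0 resv=0       # results of the three get_meminfo calls

    # @107: the configured pool must account for surplus and reserved pages.
    (( 1024 == nr_hugepages + surp + resv )) || exit 1

    # @109-@110: the kernel's HugePages_Total is fetched to compare against the
    # requested count; the comparison itself lands past the end of this excerpt.
    total=$(get_meminfo HugePages_Total)
    (( total == nr_hugepages )) || exit 1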
mem=("${mem[@]#Node +([0-9]) }") 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71247008 kB' 'MemAvailable: 75904984 kB' 'Buffers: 3724 kB' 'Cached: 16296648 kB' 'SwapCached: 0 kB' 'Active: 12331236 kB' 'Inactive: 4487836 kB' 'Active(anon): 11707512 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521960 kB' 'Mapped: 190736 kB' 'Shmem: 11188812 kB' 'KReclaimable: 285284 kB' 'Slab: 786288 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501004 kB' 'KernelStack: 20048 kB' 'PageTables: 10308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13041360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221324 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB' 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.374 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.375 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.376 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.639 nr_hugepages=1024 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.639 resv_hugepages=0 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.639 surplus_hugepages=0 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.639 anon_hugepages=0 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.639 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71252284 kB' 'MemAvailable: 75910260 kB' 
'Buffers: 3724 kB' 'Cached: 16296648 kB' 'SwapCached: 0 kB' 'Active: 12330980 kB' 'Inactive: 4487836 kB' 'Active(anon): 11707256 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521736 kB' 'Mapped: 190736 kB' 'Shmem: 11188812 kB' 'KReclaimable: 285284 kB' 'Slab: 786192 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 500908 kB' 'KernelStack: 20016 kB' 'PageTables: 10396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13040108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221244 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.640 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.641 02:58:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 21249588 kB' 'MemUsed: 11385040 kB' 'SwapCached: 0 kB' 'Active: 7819804 kB' 'Inactive: 1118852 kB' 'Active(anon): 7513392 kB' 'Inactive(anon): 0 kB' 'Active(file): 306412 kB' 'Inactive(file): 1118852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8630928 kB' 'Mapped: 131264 kB' 'AnonPages: 310900 kB' 'Shmem: 7205664 kB' 'KernelStack: 11320 kB' 'PageTables: 5560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163692 kB' 'Slab: 439824 kB' 'SReclaimable: 163692 kB' 'SUnreclaim: 276132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
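Every get_meminfo call traced here follows the same shape: pick /proc/meminfo or a per-node /sys/devices/system/node/nodeN/meminfo file, strip the "Node N " prefix the per-node files carry, then split each "Field: value" line on IFS=': ' until the requested field matches and echo its value. A minimal standalone sketch of that lookup, assuming bash with process substitution (get_meminfo_sketch is an illustrative name, not SPDK's helper):

    # get_meminfo_sketch FIELD [NODE] -- print FIELD's numeric value.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node <n> "; strip it so
        # both layouts parse identically, then split "Field: value kB".
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

On the system above, get_meminfo_sketch HugePages_Total would print 1024, and get_meminfo_sketch HugePages_Surp 0 would print 0 for NUMA node 0.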
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.641 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 21249588 kB' 'MemUsed: 11385040 kB' 'SwapCached: 0 kB' 'Active: 7819804 kB' 'Inactive: 1118852 kB' 'Active(anon): 7513392 kB' 'Inactive(anon): 0 kB' 'Active(file): 306412 kB' 'Inactive(file): 1118852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8630928 kB' 'Mapped: 131264 kB' 'AnonPages: 310900 kB' 'Shmem: 7205664 kB' 'KernelStack: 11320 kB' 'PageTables: 5560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163692 kB' 'Slab: 439824 kB' 'SReclaimable: 163692 kB' 'SUnreclaim: 276132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:59.642 [xtrace scan condensed: "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" repeat for each non-matching node0 meminfo field, MemTotal through HugePages_Free]
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
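Node 0's check is now complete; the pass below repeats it for node 1. Around these calls, hugepages.sh builds each node's expected count by folding the reserved pages and then the node's surplus onto the even 512-per-node share. A hedged sketch of that accounting loop, reusing the illustrative get_meminfo_sketch helper above (the array name nodes_expect and the fixed 512 share are assumptions drawn from this particular run, not SPDK's exact code):

    declare -a nodes_expect
    resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in this run
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Even 2G allocation: 1024 pages over 2 nodes -> 512 apiece,
        # adjusted by reserved and per-node surplus pages (both 0 here).
        nodes_expect[node]=$(( 512 + resv ))
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        (( nodes_expect[node] += surp ))
        echo "node$node=${nodes_expect[node]} expecting 512"
    done

With resv and surp both 0, this prints the same "node0=512 expecting 512" / "node1=512 expecting 512" lines the test emits below.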
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.643 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688340 kB' 'MemFree: 50001676 kB' 'MemUsed: 10686664 kB' 'SwapCached: 0 kB' 'Active: 4510156 kB' 'Inactive: 3368984 kB' 'Active(anon): 4192844 kB' 'Inactive(anon): 0 kB' 'Active(file): 317312 kB' 'Inactive(file): 3368984 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7669452 kB' 'Mapped: 59976 kB' 'AnonPages: 209764 kB' 'Shmem: 3983156 kB' 'KernelStack: 8248 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121592 kB' 'Slab: 346336 kB' 'SReclaimable: 121592 kB' 'SUnreclaim: 224744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:59.643 [xtrace scan condensed: "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" repeat for each non-matching node1 meminfo field, MemTotal through HugePages_Free]
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:59.644 node0=512 expecting 512
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:59.644 node1=512 expecting 512
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:59.644
00:03:59.644 real	0m3.533s
00:03:59.644 user	0m1.427s
00:03:59.644 sys	0m2.140s
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:59.644 ************************************
00:03:59.644 END TEST even_2G_alloc
00:03:59.644 ************************************
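even_2G_alloc ends with both NUMA nodes at their expected 512 pages. The odd_alloc test that starts below requests 2098176 kB, which at the 2048 kB hugepage size no longer divides evenly: it rounds up to 1025 pages, and the per-node loop leaves the odd page on node 0 (513 vs 512, as the trace below shows). A small sketch of that round-up and near-even split, using values from the trace (the variable names mirror the script, but the arithmetic is a reconstruction, not SPDK's exact code):

    size=2098176              # requested kB, from get_test_nr_hugepages 2098176
    default_hugepages=2048    # kB per page, from Hugepagesize above
    # Ceiling division: 2098176 / 2048 = 1024.5 -> 1025 pages.
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))

    _no_nodes=2
    declare -a nodes_test
    _nr=$nr_hugepages
    while (( _no_nodes > 0 )); do
        # Highest-numbered node gets its floor share first; whatever
        # remains rolls down, so node 1 ends at 512 and node 0 at 513.
        nodes_test[_no_nodes - 1]=$(( _nr / _no_nodes ))
        _nr=$(( _nr - nodes_test[_no_nodes - 1] ))
        (( _no_nodes-- ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # 513 512

This reproduces the nodes_test[1]=512 / nodes_test[0]=513 assignments visible in the odd_alloc trace that follows.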
02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.644 node0=512 expecting 512 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:59.644 node1=512 expecting 512 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:59.644 00:03:59.644 real 0m3.533s 00:03:59.644 user 0m1.427s 00:03:59.644 sys 0m2.140s 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:59.644 02:58:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.644 ************************************ 00:03:59.644 END TEST even_2G_alloc 00:03:59.644 ************************************ 00:03:59.644 02:58:30 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:59.644 02:58:30 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:59.644 02:58:30 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:59.644 02:58:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.644 ************************************ 00:03:59.644 START TEST odd_alloc 00:03:59.644 ************************************ 00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- 
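The 'node0=512 expecting 512' lines above come from a compact equality check visible in the hugepages.sh trace entries (@126-@130): each node's page count is used as a key into an associative array, so if every node got the same count the array ends up with exactly one key. A minimal bash sketch of that trick, reconstructed from the xtrace; the array names match the trace, but the exact echo format and which array holds the sysfs reading are assumptions, not the verbatim source:

#!/usr/bin/env bash
# Reconstruction of the per-node equality check seen in the xtrace above.
declare -A sorted_t=() sorted_s=()
nodes_test=([0]=512 [1]=512)   # what the test asked each node for (assumed role)
nodes_sys=([0]=512 [1]=512)    # what sysfs reports per node (assumed role)

for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1   # key the array by the count itself
    sorted_s[${nodes_sys[node]}]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done

# If all nodes saw the same count, each array has exactly one key, and the
# two keys must agree -- the '[[ 512 == 512 ]]' comparison in the trace:
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]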
00:03:59.644 02:58:30 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:59.644 02:58:30 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:59.644 02:58:30 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:59.644 02:58:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:59.644 ************************************
00:03:59.644 START TEST odd_alloc
00:03:59.644 ************************************
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:59.644 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:59.645 02:58:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:04:02.939 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:04:02.939 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:02.939 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:02.939 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
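The sizing trace above is the whole odd-allocation setup in miniature: HUGEMEM=2049 corresponds to 2049 MB = 2098176 kB, which at the 2048 kB default page size is 1024.5 pages; the trace lands on the odd count nr_hugepages=1025, consistent with rounding up (the rounding is an assumption about the helper). get_test_nr_hugepages_per_node then walks _no_nodes down from 2, so node1 gets 512 pages and node0 absorbs the remainder with 513. A minimal sketch of that split, reconstructed from the xtrace values; variable names mirror the trace, not the exact source of test/setup/hugepages.sh:

#!/usr/bin/env bash
# Sketch of the per-node hugepage split seen in the odd_alloc xtrace above.
default_hugepages=2048                    # kB per 2M hugepage
size=2098176                              # kB, i.e. HUGEMEM=2049 (MB)

# 2098176 / 2048 = 1024.5 -> the trace ends up with the odd count 1025;
# ceiling division reproduces that (assumption about the real helper):
nr_hugepages=$(((size + default_hugepages - 1) / default_hugepages))

_nr_hugepages=$nr_hugepages
_no_nodes=2
nodes_test=()

# Give each remaining node an equal integer share; the division remainder
# accumulates toward the last node processed (node 0 here, which gets 513).
while ((_no_nodes > 0)); do
    nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
    ((_nr_hugepages -= nodes_test[_no_nodes - 1]))
    ((_no_nodes--))
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512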
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19-20 -- # local var val; local mem_f mem
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.939 02:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71231816 kB' 'MemAvailable: 75889792 kB' 'Buffers: 3724 kB' 'Cached: 16296800 kB' 'SwapCached: 0 kB' 'Active: 12329096 kB' 'Inactive: 4487836 kB' 'Active(anon): 11705372 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519704 kB' 'Mapped: 190808 kB' 'Shmem: 11188964 kB' 'KReclaimable: 285284 kB' 'Slab: 787024 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501740 kB' 'KernelStack: 19520 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000488 kB' 'Committed_AS: 13039208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221212 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
00:04:02.939-00:04:02.941 02:58:33-02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # read/skip loop: [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] fails and hits 'continue' for every key from MemTotal through HardwareCorrupted, in file order
00:04:02.941 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.941 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.941 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:02.941 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
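All of the skipped-key iterations collapsed above come from one small parser: get_meminfo splits each /proc/meminfo line on ': ', compares the key against the requested field, and echoes the value of the first match. A self-contained sketch of that technique, following the IFS/read/continue pattern visible in the setup/common.sh xtrace (names taken from the trace; the real helper also handles the per-node /sys/devices/system/node/nodeN/meminfo file by stripping the 'Node N ' prefix, as the mem=("${mem[@]#Node +([0-9]) }") entry shows, which this sketch omits):

#!/usr/bin/env bash
# Minimal stand-in for setup/common.sh's get_meminfo, global file only.
get_meminfo() {
    local get=$1
    local var val
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other key
        echo "$val"                        # value without the unit, e.g. "1025"
        return 0
    done </proc/meminfo
    return 1
}

get_meminfo HugePages_Total   # prints 1025 on the node traced above
get_meminfo AnonHugePages     # prints 0, matching the 'anon=0' in the trace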
00:04:02.941 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:02.941 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-31 -- # same setup as above with get=HugePages_Surp: mem_f=/proc/meminfo, mapfile -t mem, IFS=': '
00:04:02.941 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71232216 kB' 'MemAvailable: 75890192 kB' 'Buffers: 3724 kB' 'Cached: 16296800 kB' 'SwapCached: 0 kB' 'Active: 12328648 kB' 'Inactive: 4487836 kB' 'Active(anon): 11704924 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519244 kB' 'Mapped: 190748 kB' 'Shmem: 11188964 kB' 'KReclaimable: 285284 kB' 'Slab: 787068 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501784 kB' 'KernelStack: 19488 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000488 kB' 'Committed_AS: 13039224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221180 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
00:04:02.941-00:04:02.942 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # read/skip loop: [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] fails and hits 'continue' for every key from MemTotal through HugePages_Rsvd, in file order
00:04:02.942 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.942 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.942 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:02.943 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
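The two snapshots above are internally consistent with the odd split configured earlier, which is worth checking by hand: 1025 total pages are exactly node0's 513 plus node1's 512, and the Hugetlb figure is the page count times the page size. A quick cross-check against the dump values:

# Values taken from the meminfo dumps above:
# HugePages_Total = 1025 = 513 (node0) + 512 (node1)
# Hugetlb         = HugePages_Total * Hugepagesize = 1025 * 2048 kB
# HugePages_Free  = 1025 (nothing has mapped them yet); Surp = Rsvd = 0
echo $((1025 * 2048))   # 2099200, matching 'Hugetlb: 2099200 kB'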
00:04:02.943 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:02.943 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-31 -- # same setup again with get=HugePages_Rsvd: mem_f=/proc/meminfo, mapfile -t mem, IFS=': '
00:04:02.943 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71232216 kB' 'MemAvailable: 75890192 kB' 'Buffers: 3724 kB' 'Cached: 16296804 kB' 'SwapCached: 0 kB' 'Active: 12328836 kB' 'Inactive: 4487836 kB' 'Active(anon): 11705112 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519424 kB' 'Mapped: 190748 kB' 'Shmem: 11188968 kB' 'KReclaimable: 285284 kB' 'Slab: 787068 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501784 kB' 'KernelStack: 19488 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000488 kB' 'Committed_AS: 13039244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221180 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
00:04:02.943-00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # read/skip loop: [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] fails and hits 'continue' for every key from MemTotal through VmallocUsed, in file order
00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 
02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.944 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:02.944 nr_hugepages=1025 00:04:02.945 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.945 resv_hugepages=0 00:04:02.945 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.945 surplus_hugepages=0 00:04:02.945 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.945 anon_hugepages=0 00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
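The loop traced above (and re-entered below for HugePages_Total and the per-node lookups) is the get_meminfo pattern from setup/common.sh: split each meminfo line on ': ', skip every key that is not the requested one, and print the value of the first match. A minimal self-contained sketch of that pattern, assuming only bash plus the /proc and sysfs meminfo files; the name get_meminfo_sketch and the sed-based prefix strip are illustrative, not the SPDK helper itself:

    # Print the value of one meminfo key, system-wide or for one NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # per-node counters live in sysfs when a node index is given
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # per-node files prefix every line with "Node <n> "; strip it so both
        # file flavors split the same way on ': '
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the "continue" hits in the trace
            echo "$val"                        # e.g. 1025 for HugePages_Total
            return 0
        done < <(sed "s/^Node $node //" "$mem_f")
        return 1
    }

    # On this box: get_meminfo_sketch HugePages_Rsvd   -> 0
    #              get_meminfo_sketch HugePages_Surp 0 -> 0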
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71232156 kB' 'MemAvailable: 75890132 kB' 'Buffers: 3724 kB' 'Cached: 16296856 kB' 'SwapCached: 0 kB' 'Active: 12328472 kB' 'Inactive: 4487836 kB' 'Active(anon): 11704748 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519008 kB' 'Mapped: 190748 kB' 'Shmem: 11189020 kB' 'KReclaimable: 285284 kB' 'Slab: 787068 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501784 kB' 'KernelStack: 19488 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000488 kB' 'Committed_AS: 13039268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221180 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
00:04:03.207 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: every key from MemTotal through Unaccepted is read, fails the HugePages_Total comparison, and hits continue]
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
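With the reserved, surplus, and total counts all read back, the check that follows (the hugepages.sh arithmetic at @107/@109/@110 in this trace) reduces to two comparisons. A sketch using the values this run reports; the variable names mirror the trace but are written out here only for illustration:

    nr_hugepages=1025   # the odd page count the test requested
    resv=0              # HugePages_Rsvd, read above
    surp=0              # surplus_hugepages=0, echoed above
    total=1025          # HugePages_Total, read above

    # the pool must balance, and the kernel must have granted exactly
    # the requested odd count
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    (( total == nr_hugepages )) || echo "got $total, wanted $nr_hugepages" >&2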
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 21251756 kB' 'MemUsed: 11382872 kB' 'SwapCached: 0 kB' 'Active: 7819752 kB' 'Inactive: 1118852 kB' 'Active(anon): 7513340 kB' 'Inactive(anon): 0 kB' 'Active(file): 306412 kB' 'Inactive(file): 1118852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8631008 kB' 'Mapped: 131268 kB' 'AnonPages: 310856 kB' 'Shmem: 7205744 kB' 'KernelStack: 11384 kB' 'PageTables: 5656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163692 kB' 'Slab: 440404 kB' 'SReclaimable: 163692 kB' 'SUnreclaim: 276712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.209 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: node0 keys from MemTotal through HugePages_Free read and skipped via continue]
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
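The same lookup now runs once per NUMA node: get_nodes enumerated /sys/devices/system/node/node*/ (two nodes here, and an odd 1025-page pool necessarily splits unevenly, 512 on node0 and 513 on node1), and the @115-@117 loop folds each node's reserved and surplus pages into its expected count. A sketch of that walk, reusing get_meminfo_sketch from above; the array name nodes_test and the seeding step are illustrative:

    shopt -s extglob nullglob           # +([0-9]) globbing; expand to nothing on no match
    resv=0                              # HugePages_Rsvd from the system-wide read
    declare -a nodes_test
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        # seed with the node's hugepage count, then add reserved and surplus
        nodes_test[n]=$(get_meminfo_sketch HugePages_Total "$n")
        (( nodes_test[n] += resv ))
        (( nodes_test[n] += $(get_meminfo_sketch HugePages_Surp "$n") ))
    done
    # this run: nodes_test[0]=512, nodes_test[1]=513, both with 0 surplus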
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.210 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688340 kB' 'MemFree: 49980400 kB' 'MemUsed: 10707940 kB' 'SwapCached: 0 kB' 'Active: 4508968 kB' 'Inactive: 3368984 kB' 'Active(anon): 4191656 kB' 'Inactive(anon): 0 kB' 'Active(file): 317312 kB' 'Inactive(file): 3368984 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7669572 kB' 'Mapped: 59480 kB' 'AnonPages: 208400 kB' 'Shmem: 3983276 kB' 'KernelStack: 8104 kB' 'PageTables: 2760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121592 kB' 'Slab: 346664 kB' 'SReclaimable: 121592 kB' 'SUnreclaim: 225072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: node1 keys from MemTotal onward read and skipped via continue while scanning for HugePages_Surp]
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.211 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.212 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.212 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:04:03.212 node0=512 expecting 513 00:04:03.212 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.212 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.212 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.212 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:03.212 node1=513 expecting 512 00:04:03.212 02:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:03.212 00:04:03.212 real 0m3.468s 00:04:03.212 user 0m1.359s 00:04:03.212 sys 0m2.156s 00:04:03.212 02:58:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:03.212 02:58:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.212 ************************************ 00:04:03.212 END TEST odd_alloc 00:04:03.212 ************************************ 00:04:03.212 02:58:34 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:03.212 02:58:34 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:03.212 02:58:34 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:03.212 02:58:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.212 ************************************ 00:04:03.212 START TEST custom_alloc 00:04:03.212 ************************************ 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- 
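The @49-@57 trace above is the size-to-page-count step: a requested pool size in kB is divided by the default hugepage size to give nr_hugepages (1048576 kB / 2048 kB = 512 pages, with the 2048 kB Hugepagesize reported in the meminfo dumps further down). A minimal bash sketch of that arithmetic; the function body is illustrative, not setup/hugepages.sh verbatim, and the "(( 1 > 1 ))" check at @50 is consistent with an argument-count test for an optional list of target nodes:

    # Sketch: convert a requested pool size (kB) into a hugepage count,
    # as the @49-@57 trace does. Sets the global nr_hugepages on success.
    get_test_nr_hugepages() {
        local size=$1    # e.g. 1048576 (1 GiB expressed in kB)
        local default_hugepages
        default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this rig
        ((size >= default_hugepages)) || return 1   # assumed failure path, not shown in the trace
        nr_hugepages=$((size / default_hugepages))  # 1048576 / 2048 = 512
    }

The per-node distribution of those 512 pages is traced next.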
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.212 02:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:04:05.748 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:04:06.007 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:06.007 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:06.007 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
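Two distribution passes are visible above. With no nodes_hp entries yet, @81-@84 split the 512 pages evenly over the two nodes (256 each), counting _no_nodes down and subtracting as it goes; once nodes_hp[0]=512 and nodes_hp[1]=1024 are pinned, @75-@76 simply copy those counts into nodes_test, and @182 joins them into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' for scripts/setup.sh. A sketch of the even-split loop, with illustrative names; the ": 256", ": 1", ": 0" lines in the trace are exactly how xtrace renders the two ":" arithmetic side effects below:

    # Even split: hand each node remaining/_no_nodes pages, last node first.
    split_evenly() {
        local remaining=$1 _no_nodes=$2
        local -a nodes_test=()
        while ((_no_nodes > 0)); do
            nodes_test[_no_nodes - 1]=$((remaining / _no_nodes))  # 512/2=256, then 256/1=256
            : $((remaining -= nodes_test[_no_nodes - 1]))         # traces as ': 256', ': 0'
            : $((--_no_nodes))                                    # traces as ': 1', ': 0'
        done
        echo "${nodes_test[@]}"
    }
    split_evenly 512 2   # prints: 256 256

setup.sh has now rebuilt the pool with 512 pages on node0 and 1024 on node1, 1536 in total, which verify_nr_hugepages checks next.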
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.275 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.276 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.276 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.276 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.276 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.276 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.276 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.276 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 70185636 kB' 'MemAvailable: 74843612 kB' 'Buffers: 3724 kB' 'Cached: 16296952 kB' 'SwapCached: 0 kB' 'Active: 12330416 kB' 'Inactive: 4487836 kB' 'Active(anon): 11706692 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520936 kB' 'Mapped: 190816 kB' 'Shmem: 11189116 kB' 'KReclaimable: 285284 kB' 'Slab: 786436 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501152 kB' 'KernelStack: 19520 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477224 kB' 'Committed_AS: 13039880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221196 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
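Every get_meminfo call traced here follows the same pattern: slurp a meminfo file into an array, strip the "Node N " prefix that per-node files carry, then walk the dump field by field until the requested key matches and echo its value (the @96 test just before it checks that transparent hugepages are not pinned to [never] in sysfs). A condensed reconstruction of that loop, a readable approximation of setup/common.sh rather than the file verbatim; it needs extglob for the +([0-9]) pattern:

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read that node's own meminfo instead;
        # its lines are prefixed "Node N ", stripped by the glob below.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo AnonHugePages   # prints 0 on this box, per the dump above

That per-field walk is why the raw trace repeats the same three statements once per meminfo line, as condensed below.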
00:04:06.276 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.276 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: continue/IFS/read repeated for every field of the dump above (MemFree, MemAvailable, Buffers, Cached, SwapCached, the Active/Inactive families, Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) until AnonHugePages is reached]
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
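At this point verify_nr_hugepages holds anon=0 and is about to read surp (HugePages_Surp) and then resv (HugePages_Rsvd) from the dump that follows. The comparison itself falls outside this excerpt, so the following is only a plausible shape for the check, under the assumption that the harness weighs the configured total (nr_hugepages=1536, i.e. nodes_hp[0]+nodes_hp[1]) against what the kernel reports:

    # Hypothetical verification step; get_meminfo as sketched earlier.
    anon=$(get_meminfo AnonHugePages)    # 0 kB: no THP interference
    surp=$(get_meminfo HugePages_Surp)   # 0: no surplus pages in the pool
    resv=$(get_meminfo HugePages_Rsvd)   # 0: nothing reserved yet
    total=$(get_meminfo HugePages_Total) # 1536 per the dumps in this log
    ((total - surp == 1536)) || echo "hugepage pool mismatch" >&2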
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 70189588 kB' 'MemAvailable: 74847564 kB' 'Buffers: 3724 kB' 'Cached: 16296956 kB' 'SwapCached: 0 kB' 'Active: 12329992 kB' 'Inactive: 4487836 kB' 'Active(anon): 11706268 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520508 kB' 'Mapped: 190760 kB' 'Shmem: 11189120 kB' 'KReclaimable: 285284 kB' 'Slab: 786476 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501192 kB' 'KernelStack: 19456 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477224 kB' 'Committed_AS: 13041040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221148 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
00:04:06.277 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: continue/IFS/read repeated for every field of the dump above, MemTotal through HugePages_Rsvd, until HugePages_Surp is reached]
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 70191068 kB' 'MemAvailable: 74849044 kB' 'Buffers: 3724 kB' 'Cached: 16296960 kB' 'SwapCached: 0 kB' 'Active: 12330188 kB' 'Inactive: 4487836 kB' 'Active(anon): 11706464 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520208 kB' 'Mapped: 190760 kB' 'Shmem: 11189124 kB' 'KReclaimable: 285284 kB' 'Slab: 786452 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501168 kB' 'KernelStack: 19440 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477224 kB' 'Committed_AS: 13041064 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221132 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.279 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.280 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc 
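For readability, everything traced above reduces to one helper: setup/common.sh's get_meminfo reads a meminfo file into an array, strips any "Node N " prefix, then scans it key by key until it finds the requested field and echoes the value. A minimal sketch reconstructed from the @17-@33 trace lines above (hedged: this is not the verbatim SPDK source; the loop framing and argument handling are assumptions):

shopt -s extglob   # required by the +([0-9]) pattern visible in the trace

# Sketch of get_meminfo as evidenced by the setup/common.sh trace.
# $1 = meminfo key to look up, $2 = optional NUMA node number.
get_meminfo() {
    local get=$1
    local node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # With a node argument, prefer the per-node meminfo (@23/@24); with no
    # node this probes the nonexistent .../node/node/meminfo, exactly as the
    # trace shows, and falls through to /proc/meminfo.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it (@29).
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
}

So the call above walked all of /proc/meminfo before matching HugePages_Rsvd and echoing 0, which setup/hugepages.sh stored as resv=0.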
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:06.281 nr_hugepages=1536
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:06.281 resv_hugepages=0
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:06.281 surplus_hugepages=0
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:06.281 anon_hugepages=0
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.281 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 70188800 kB' 'MemAvailable: 74846776 kB' 'Buffers: 3724 kB' 'Cached: 16296992 kB' 'SwapCached: 0 kB' 'Active: 12330740 kB' 'Inactive: 4487836 kB' 'Active(anon): 11707016 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520916 kB' 'Mapped: 190760 kB' 'Shmem: 11189156 kB' 'KReclaimable: 285284 kB' 'Slab: 786452 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 501168 kB' 'KernelStack: 19584 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477224 kB' 'Committed_AS: 13042384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221292 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
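Taken together, the setup/hugepages.sh lines above are the bookkeeping this custom_alloc test exists for: the 1536 pages the test configured must equal HugePages_Total plus surplus and reserved pages, and get_nodes then records the intended per-node split (512 on node0, 1024 on node1 in this run) so each node can be verified on its own. A hedged sketch of that accounting using the traced variable names; the sysfs path used to fill nodes_sys is an assumption, since the trace only shows the already-expanded values 512 and 1024:

shopt -s extglob

# Sketch of the checks seen at hugepages.sh@99-@116 (not verbatim source).
nr_hugepages=1536
surp=$(get_meminfo HugePages_Surp)   # 0 in this run (@99)
resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run (@100)
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# Global totals must reconcile (@107-@110).
(( 1536 == nr_hugepages + surp + resv )) || exit 1
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

# Expected per-node split, indexed by the numeric node suffix (@29/@30).
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # Assumed source of the 512/1024 values shown in the trace:
    nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || exit 1

Each entry is then cross-checked against that node's own meminfo via get_meminfo <field> <node>, which is exactly the get_meminfo HugePages_Surp 0 call whose trace continues below.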
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 21249912 kB' 'MemUsed: 11384716 kB' 'SwapCached: 0 kB' 'Active: 7819476 kB' 'Inactive: 1118852 kB' 'Active(anon): 7513064 kB' 'Inactive(anon): 0 kB' 'Active(file): 306412 kB' 'Inactive(file): 1118852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8631144 kB' 'Mapped: 131280 kB' 'AnonPages: 309820 kB' 'Shmem: 7205880 kB' 'KernelStack: 11320 kB' 'PageTables: 5460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163692 kB' 'Slab: 439728 kB' 'SReclaimable: 163692 kB' 'SUnreclaim: 276036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.283 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 --
# read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.284 02:58:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.284 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688340 kB' 'MemFree: 48937864 kB' 'MemUsed: 11750476 kB' 'SwapCached: 0 kB' 'Active: 4511564 kB' 'Inactive: 3368984 kB' 'Active(anon): 4194252 kB' 'Inactive(anon): 0 kB' 'Active(file): 317312 kB' 'Inactive(file): 3368984 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7669572 kB' 'Mapped: 59480 kB' 'AnonPages: 211076 kB' 'Shmem: 3983276 kB' 'KernelStack: 8392 kB' 'PageTables: 3392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121592 kB' 'Slab: 346724 kB' 'SReclaimable: 121592 kB' 'SUnreclaim: 225132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.544 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:06.545 node0=512 expecting 512 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- 
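For reference, the lookup the two traces above perform can be written as a small standalone helper. This is a condensed reading of the setup/common.sh logic visible in the xtrace, not the verbatim script:

shopt -s extglob   # needed for the +([0-9]) strip pattern below

# Minimal sketch of get_meminfo as traced above (illustrative, condensed).
get_meminfo() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo
    # With a node id, prefer the per-node view when it exists
    # (e.g. /sys/devices/system/node/node0/meminfo).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Walk field by field, exactly like the repeated
    # [[ var == HugePages_Surp ]] comparisons condensed above;
    # print the value of the first matching field.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# get_meminfo HugePages_Surp 0   -> 0     (node0 snapshot above)
# get_meminfo HugePages_Total 1  -> 1024  (node1 snapshot above)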
00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:04:06.545 02:58:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:06.545
00:04:06.545 real 0m3.210s
00:04:06.545 user 0m1.260s
00:04:06.545 sys 0m1.979s
00:04:06.546 02:58:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:06.546 02:58:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:06.546 ************************************
00:04:06.546 END TEST custom_alloc
00:04:06.546 ************************************
00:04:06.546 02:58:37 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:06.546 02:58:37 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:06.546 02:58:37 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:06.546 02:58:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:06.546 ************************************
00:04:06.546 START TEST no_shrink_alloc
00:04:06.546 ************************************
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
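The sizing step just traced is visible in the numbers: 2097152 kB requested, a 2048 kB default hugepage, hence nr_hugepages=1024 assigned to node 0. A rough re-derivation, assuming both quantities are in kB as the trace suggests (hypothetical helper, not the hugepages.sh source):

# Sketch of the size -> page-count step traced above. Assumes size and
# Hugepagesize are both in kB, matching the trace (2097152 / 2048 = 1024).
default_hugepages=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)

get_test_nr_hugepages() {
    local size=$1; shift
    local node_ids=("$@")                        # e.g. ('0')
    (( size >= default_hugepages )) || return 1  # same guard as hugepages.sh@55
    local nr_hugepages=$(( size / default_hugepages ))
    local -a nodes_test=()
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages           # hugepages.sh@71 equivalent
    done
    echo "nr_hugepages=$nr_hugepages (nodes: ${node_ids[*]})"
}
# get_test_nr_hugepages 2097152 0   -> nr_hugepages=1024 (nodes: 0)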
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:06.546 02:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:04:09.081 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:04:09.342 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:09.342 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:09.342 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:09.342 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
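verify_nr_hugepages begins by confirming that transparent hugepages are not globally disabled (the "always [madvise] never != *[never]*" test just above) before sampling AnonHugePages. Condensed into a sketch, reusing the get_meminfo helper sketched earlier; this is an illustration, not the verbatim hugepages.sh:

# Sketch of the verify_nr_hugepages prologue traced here (condensed).
verify_prologue() {
    local anon=0 thp
    # On this box the file reads "always [madvise] never",
    # i.e. THP is not globally disabled.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # No node argument: get_meminfo falls back to /proc/meminfo.
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the snapshot below
    fi
    echo "anon=$anon"
}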
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:09.343 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71265016 kB' 'MemAvailable: 75922992 kB' 'Buffers: 3724 kB' 'Cached: 16297096 kB' 'SwapCached: 0 kB' 'Active: 12332104 kB' 'Inactive: 4487836 kB' 'Active(anon): 11708380 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522460 kB' 'Mapped: 190844 kB' 'Shmem: 11189260 kB' 'KReclaimable: 285284 kB' 'Slab: 786036 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 500752 kB' 'KernelStack: 19664 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13040284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221196 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
[xtrace condensed: setup/common.sh@31-32 loop — each /proc/meminfo field above is read with IFS=': ' and compared to AnonHugePages; every field from MemTotal through HardwareCorrupted takes the continue branch]
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:09.344 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:09.345 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71264796 kB' 'MemAvailable: 75922772 kB' 'Buffers: 3724 kB' 'Cached: 16297100 kB' 'SwapCached: 0 kB' 'Active: 12332780 kB' 'Inactive: 4487836 kB' 'Active(anon): 11709056 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523208 kB' 'Mapped: 190844 kB' 'Shmem: 11189264 kB' 'KReclaimable: 285284 kB' 'Slab: 785980 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 500696 kB' 'KernelStack: 19680 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13040300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221180 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
[xtrace condensed: the same per-field scan runs against HugePages_Surp; the captured trace breaks off mid-scan, after the Slab comparison]
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.345 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.345 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.345 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.345 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.346 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 
02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.610 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71264996 kB' 'MemAvailable: 75922972 kB' 'Buffers: 3724 kB' 'Cached: 16297116 kB' 'SwapCached: 0 kB' 'Active: 12332332 kB' 'Inactive: 4487836 kB' 'Active(anon): 11708608 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522748 kB' 'Mapped: 190784 kB' 'Shmem: 11189280 kB' 'KReclaimable: 285284 kB' 'Slab: 786044 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 500760 kB' 'KernelStack: 19552 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13054664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221180 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB' 00:04:09.610 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.610 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.610 
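The scan pattern visible in the trace above is SPDK's get_meminfo helper from setup/common.sh: snapshot the meminfo source once with mapfile, then split each "key: value" line with IFS=': ' and read -r, skipping keys until the requested one matches. A minimal sketch of that idiom, not the actual helper (simplified: the real function also supports per-node /sys/devices/system/node/nodeN/meminfo and strips the "Node N" prefix, as the node= and mem=("${mem[@]#Node +([0-9]) }") lines show):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo idiom traced above (system-wide only).
    get_meminfo() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo        # one consistent snapshot
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # skip non-matching keys
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on this box, per the snapshot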
00:04:09.609 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:09.609 [trace condensed: same get_meminfo setup as above with get=HugePages_Rsvd, then the key-by-key scan]
00:04:09.610 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71264996 kB' 'MemAvailable: 75922972 kB' 'Buffers: 3724 kB' 'Cached: 16297116 kB' 'SwapCached: 0 kB' 'Active: 12332332 kB' 'Inactive: 4487836 kB' 'Active(anon): 11708608 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522748 kB' 'Mapped: 190784 kB' 'Shmem: 11189280 kB' 'KReclaimable: 285284 kB' 'Slab: 786044 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 500760 kB' 'KernelStack: 19552 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13054664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221180 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
00:04:09.610 [trace condensed: every key from MemTotal up to HugePages_Rsvd fails the HugePages_Rsvd test and is skipped with continue]
00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
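The snapshots printed in this section are internally consistent on the hugepage side: with a single page size configured, Hugetlb should equal HugePages_Total times Hugepagesize, and here 1024 pages x 2048 kB = 2097152 kB, exactly the 'Hugetlb: 2097152 kB' field each snapshot reports. Likewise 'AnonHugePages: 0 kB', 'HugePages_Surp: 0' and 'HugePages_Rsvd: 0' match the anon=0, surp=0 and resv=0 values the three scans just produced.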
00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:09.612 nr_hugepages=1024
00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:09.612 resv_hugepages=0
00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:09.612 surplus_hugepages=0
00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:09.612 anon_hugepages=0
00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
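The two arithmetic guards at hugepages.sh@107 and @109 encode the test's accounting invariant: the kernel's hugepage pool must be exactly the requested nr_hugepages plus the surplus and reserved pages just read back. A sketch of that check in isolation (verify_hugepage_pool is an illustrative name, not a function from the script):

    # Illustrative check, not the script's own function: the pool the
    # test configured must account for every page the kernel reports.
    verify_hugepage_pool() {
        local total=$1 nr_hugepages=$2 surp=$3 resv=$4
        (( total == nr_hugepages + surp + resv ))
    }

    verify_hugepage_pool 1024 1024 0 0 && echo "hugepage accounting OK"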
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 
02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.612 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.613 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the IFS=': ' / read -r var val _ / continue cycle repeats for each remaining field, Dirty through Unaccepted, none matching HugePages_Total ...]
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
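The cycle traced above is common.sh's get_meminfo walking the meminfo text one 'Key: value' line at a time: IFS=': ' makes read -r split each line into key and value, every key other than the requested one falls through to continue, and the first match echoes its value and returns. A minimal sketch of that scan, assuming plain bash reading /proc/meminfo directly (the traced common.sh instead replays a captured mapfile array through printf, as the @16/@28 lines show):

    # Minimal sketch of the key scan traced above (hypothetical standalone
    # version of common.sh's get_meminfo; not the full SPDK helper).
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # not the requested key, next line
            echo "$val"                       # value only, e.g. 1024
            return 0
        done </proc/meminfo
        return 1                              # key absent
    }

    get_meminfo HugePages_Total               # prints 1024 on this node

The backslash-riddled right-hand sides like \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are only xtrace's rendering of a quoted, literal (non-glob) pattern in [[ ... == ... ]].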
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 20207668 kB' 'MemUsed: 12426960 kB' 'SwapCached: 0 kB' 'Active: 7820004 kB' 'Inactive: 1118852 kB' 'Active(anon): 7513592 kB' 'Inactive(anon): 0 kB' 'Active(file): 306412 kB' 'Inactive(file): 1118852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8631248 kB' 'Mapped: 131296 kB' 'AnonPages: 310836 kB' 'Shmem: 7205984 kB' 'KernelStack: 11352 kB' 'PageTables: 5524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163692 kB' 'Slab: 439564 kB' 'SReclaimable: 163692 kB' 'SUnreclaim: 275872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.614 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the same cycle repeats for every node0 field from MemFree through HugePages_Free, none matching HugePages_Surp ...]
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:09.616 02:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:04:12.910 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:04:12.910 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:12.910 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:12.910 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:12.910 INFO: Requested 512 hugepages but 1024 already allocated on node0
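Note how the common.sh@23/@24 lines above pick the data source for a per-node query: when get_meminfo is called with a node argument, it swaps /proc/meminfo for /sys/devices/system/node/node0/meminfo, and the @29 expansion strips the extra 'Node 0 ' column off every line so the same key scan applies unchanged. A sketch of that branch under a hypothetical name (node_meminfo); the extglob option is assumed on, since the traced +([0-9]) pattern requires it:

    shopt -s extglob  # the +([0-9]) pattern below needs this

    # Sketch of the per-node source selection traced at common.sh@22-@29.
    node_meminfo() {
        local node=$1 mem_f=/proc/meminfo mem
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # "Node 0 HugePages_Total:  1024" -> "HugePages_Total:  1024"
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }

    node_meminfo 0 | grep HugePages  # node0 here: Total 1024, Free 1024, Surp 0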
00:04:12.910 02:58:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:12.910 02:58:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:12.910 02:58:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:12.910 02:58:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:12.910 02:58:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:12.910 02:58:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:12.910 02:58:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
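The hugepages.sh@96 test just traced is a transparent-hugepage gate: the left-hand string 'always [madvise] never' is the THP mode line (the bracketed word marks the active mode), and anonymous hugepages are only worth counting when the mode is not [never], which is why get_meminfo AnonHugePages runs next. A sketch of that gate; the sysfs path is the kernel's standard location, an assumption here since the trace does not show where the string was read:

    # Sketch of the THP gate at hugepages.sh@96: count AnonHugePages only
    # when transparent hugepages are not disabled outright.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" on this node
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)  # get_meminfo as sketched earlier; 0 kB in this run
    else
        anon=0
    fi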
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71246824 kB' 'MemAvailable: 75904800 kB' 'Buffers: 3724 kB' 'Cached: 16297236 kB' 'SwapCached: 0 kB' 'Active: 12332388 kB' 'Inactive: 4487836 kB' 'Active(anon): 11708664 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522100 kB' 'Mapped: 190876 kB' 'Shmem: 11189400 kB' 'KReclaimable: 285284 kB' 'Slab: 786116 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 500832 kB' 'KernelStack: 19536 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13055540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221212 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.910 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the same cycle repeats for every field from MemFree through HardwareCorrupted, none matching AnonHugePages ...]
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
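With anon settled at 0, the function re-reads the system-wide surplus before closing the books. The arithmetic it is driving toward is the one already traced at hugepages.sh@110: the configured page count must equal HugePages_Total once surplus and reserved pages are folded in. A self-contained sketch of that accounting (the awk one-liner stands in for the traced get_meminfo; sourcing resv from HugePages_Rsvd is an assumption, as the trace only shows the summed check):

    # Sketch of the hugepages.sh@110 accounting: configured pages must equal
    # the kernel's total once surplus and reserved pages are folded in.
    get_meminfo() { awk -F': +' -v k="$1" '$1 == k { print $2 + 0 }' /proc/meminfo; }

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run (assumed source of resv)
    total=$(get_meminfo HugePages_Total)  # 1024 in this run

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage count verified: $total"
    else
        echo "mismatch: $total != $((nr_hugepages + surp + resv))" >&2
    fi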
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.911 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.912 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.912 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71245924 kB' 'MemAvailable: 75903900 kB' 'Buffers: 3724 kB' 'Cached: 16297240 kB' 'SwapCached: 0 kB' 'Active: 12331480 kB' 'Inactive: 4487836 kB' 'Active(anon): 11707756 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521716 kB' 'Mapped: 190784 kB' 'Shmem: 11189404 kB' 'KReclaimable: 285284 kB' 'Slab: 786064 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 500780 kB' 'KernelStack: 19504 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13040588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221196 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
00:04:12.912 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.912 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the same cycle repeats for every field from MemFree through VmallocChunk, none matching HugePages_Surp ...]
00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.913 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.914 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71245448 kB' 'MemAvailable: 75903424 kB' 'Buffers: 3724 kB' 'Cached: 16297256 kB' 'SwapCached: 0 kB' 'Active: 12331684 kB' 'Inactive: 4487836 kB' 'Active(anon): 
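[editor's note] The condensed spans above and below are all the same mechanism: get_meminfo() streams a meminfo file one "Key: value" line at a time and prints the value of the first key matching its argument. Below is a minimal bash sketch of that pattern, written for this note; it is not SPDK's actual test/setup/common.sh helper (which, as the trace shows, mapfiles the file and strips "Node N " prefixes with an extglob pattern), and the single-digit node handling is a simplifying assumption.

    #!/usr/bin/env bash
    # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeN/meminfo when a node number is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node [0-9] }  # sysfs lines carry a "Node N " prefix
            # "HugePages_Surp:  0" splits into var=HugePages_Surp val=0
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)  # prints 0 in the run traced here

Scanning key by key (rather than grepping once) is why the trace repeats the same three lines for every field: each non-matching key is one loop iteration ending in "continue".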
00:04:12.914 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71245448 kB' 'MemAvailable: 75903424 kB' 'Buffers: 3724 kB' 'Cached: 16297256 kB' 'SwapCached: 0 kB' 'Active: 12331684 kB' 'Inactive: 4487836 kB' 'Active(anon): 11707960 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521860 kB' 'Mapped: 190784 kB' 'Shmem: 11189420 kB' 'KReclaimable: 285284 kB' 'Slab: 786064 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 500780 kB' 'KernelStack: 19488 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13040608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221196 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
[trace condensed: setup/common.sh@31-32 walks the same key list from MemTotal onward, this time against HugePages_Rsvd, hitting "continue" on every non-match (00:04:12.914-00:04:13.178)]
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:13.178 nr_hugepages=1024
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:13.178 resv_hugepages=0
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:13.178 surplus_hugepages=0
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:13.178 anon_hugepages=0
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
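[editor's note] The @107-@110 arithmetic above and the HugePages_Total read just below assert that the kernel's huge page accounting matches what the test configured. A sketch of that check, reusing the illustrative get_meminfo() from the earlier note (variable names mirror the trace, not necessarily the script; the commented values are what this run produced):

    nr_hugepages=1024                     # requested by the test
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run
    # HugePages_Total must cover the requested pages plus surplus/reserved.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent"
    else
        echo "hugepage accounting mismatch" >&2
        exit 1
    fi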
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.178 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322968 kB' 'MemFree: 71244792 kB' 'MemAvailable: 75902768 kB' 'Buffers: 3724 kB' 'Cached: 16297280 kB' 'SwapCached: 0 kB' 'Active: 12331400 kB' 'Inactive: 4487836 kB' 'Active(anon): 11707676 kB' 'Inactive(anon): 0 kB' 'Active(file): 623724 kB' 'Inactive(file): 4487836 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521516 kB' 'Mapped: 190784 kB' 'Shmem: 11189444 kB' 'KReclaimable: 285284 kB' 'Slab: 786064 kB' 'SReclaimable: 285284 kB' 'SUnreclaim: 500780 kB' 'KernelStack: 19456 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001512 kB' 'Committed_AS: 13040636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221196 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2610132 kB' 'DirectMap2M: 30623744 kB' 'DirectMap1G: 69206016 kB'
[trace condensed: setup/common.sh@31-32 walks the key list again, this time against HugePages_Total, hitting "continue" on every non-match from MemTotal through Unaccepted (00:04:13.178-00:04:13.179)]
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:13.179 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8631340 kB' 'Mapped: 131304 kB' 'AnonPages: 312572 kB' 'Shmem: 7206076 kB' 'KernelStack: 11368 kB' 'PageTables: 5532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163692 kB' 'Slab: 439880 kB' 'SReclaimable: 163692 kB' 'SUnreclaim: 276188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 
02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 
02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:13.181 node0=1024 expecting 1024 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.181 00:04:13.181 real 0m6.613s 00:04:13.181 user 0m2.534s 00:04:13.181 sys 0m3.963s 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:13.181 02:58:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.181 ************************************ 00:04:13.181 END TEST no_shrink_alloc 00:04:13.181 ************************************ 00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.181 02:58:44 
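The scan traced above is just a field lookup over a "name: value" file. As a minimal sketch of the same pattern (not the test's own get_meminfo helper; the default field below is only an example):

    #!/usr/bin/env bash
    # Sketch of the scan traced above: read "Field: value" pairs and stop at
    # the requested one. Per-node files prefix each line with "Node N ", which
    # the test strips first (the ${mem[@]#Node +([0-9]) } expansion in the trace).
    get=${1:-HugePages_Total}              # field to look up (example default)
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every skipped field is one "continue" in the trace
        echo "$val"
        exit 0
    done < /proc/meminfo
    exit 1                                 # field not found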
00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:13.181 [... the same two-echo pass repeated for the second node ...]
00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:13.181 02:58:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:13.181 real 0m25.323s
00:04:13.181 user 0m9.595s
00:04:13.181 sys 0m14.891s
00:04:13.181 02:58:44 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:13.181 02:58:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:13.181 ************************************
00:04:13.181 END TEST hugepages
00:04:13.181 ************************************
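The clear_hp pass above simply zeroes every per-node hugepage pool so later suites start clean. A rough equivalent, assuming the usual sysfs layout and root privileges:

    #!/usr/bin/env bash
    # Sketch of what clear_hp does above: write 0 to every per-node hugepage
    # pool so following tests start from a clean slate. Needs root.
    shopt -s nullglob
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # flag consumed later by setup.sh, as in the log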
00:04:13.182 02:58:44 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh
00:04:13.182 02:58:44 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:13.182 02:58:44 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:13.182 02:58:44 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:13.182 ************************************
00:04:13.182 START TEST driver
00:04:13.182 ************************************
00:04:13.182 02:58:44 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh
00:04:13.441 * Looking for test storage...
00:04:13.441 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
00:04:13.441 02:58:44 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:13.441 02:58:44 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:13.441 02:58:44 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:04:18.715 02:58:49 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:18.715 02:58:49 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:18.715 02:58:49 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:18.715 02:58:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:18.715 ************************************
00:04:18.715 START TEST guess_driver
00:04:18.715 ************************************
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 220 > 0 ))
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:18.715 Looking for driver=vfio-pci
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:18.715 02:58:49 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:04:21.283 02:58:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ denied == \-\> ]]
00:04:21.283 02:58:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue
00:04:21.283 02:58:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:21.284 02:58:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:21.284 02:58:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:21.284 02:58:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:21.284 [... the same read / [[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] triple repeated for every remaining device line in the config output (through 00:04:22.479) ...]
00:04:22.479 02:58:53 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:22.479 02:58:53 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:22.479 02:58:53 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:22.479 02:58:53 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:04:27.749 real 0m8.953s
00:04:27.749 user 0m2.721s
00:04:27.749 sys 0m4.611s
00:04:27.749 02:58:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:27.749 02:58:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:27.749 ************************************
00:04:27.749 END TEST guess_driver
00:04:27.749 ************************************
00:04:27.749 real 0m13.807s
00:04:27.749 user 0m4.091s
00:04:27.749 sys 0m7.286s
00:04:27.749 02:58:58 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:27.749 02:58:58 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:27.749 ************************************
00:04:27.749 END TEST driver
00:04:27.749 ************************************
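guess_driver lands on vfio-pci above because IOMMU groups are populated and the module resolves with its dependencies. A simplified sketch of that decision, not the exact driver.sh logic; the uio_pci_generic fallback here is illustrative:

    #!/usr/bin/env bash
    # Simplified sketch of the driver choice traced above: prefer vfio-pci
    # when the IOMMU is active and the module is available, otherwise try a
    # generic uio driver, otherwise report failure (string as in the log).
    shopt -s nullglob
    groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &>/dev/null; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic &>/dev/null; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi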
02:58:58 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh
02:58:58 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
02:58:58 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
02:58:58 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:27.749 ************************************
00:04:27.749 START TEST devices
00:04:27.749 ************************************
02:58:58 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh
00:04:27.749 * Looking for test storage...
00:04:27.749 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
02:58:58 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
02:58:58 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
02:58:58 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
02:58:58 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ host-managed != none ]]
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1670 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:04:31.039 02:59:01 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]]
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@203 -- # continue
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]]
00:04:31.039 02:59:01 setup.sh.devices -- setup/devices.sh@203 -- # continue
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1
00:04:31.040 02:59:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt
00:04:31.040 02:59:01 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1
00:04:31.040 No valid GPT data, bailing
00:04:31.040 02:59:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:04:31.040 02:59:01 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:31.040 02:59:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1
00:04:31.040 02:59:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1
00:04:31.040 02:59:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]]
00:04:31.040 02:59:01 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1
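The devices suite above first filters out zoned namespaces (here nvme0n2 on 0000:5f:00.0) before picking a test disk. A condensed sketch of that filter; the PCI-address lookup path is an assumption and varies with kernel sysfs layout:

    #!/usr/bin/env bash
    # Sketch of the zoned-device filter traced above: any block device whose
    # queue reports something other than "none" is excluded, keyed by PCI
    # address so the whole controller can be skipped later.
    shopt -s nullglob
    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        if [[ $(<"$nvme/queue/zoned") != none ]]; then
            # assumed path to the PCI device dir; layout differs by kernel
            bdf=$(readlink -f "$nvme/device/device" 2>/dev/null)
            zoned_devs[${nvme##*/}]=${bdf##*/}
        fi
    done
    declare -p zoned_devs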
00:04:31.040 02:59:01 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:31.040 02:59:01 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:31.040 02:59:01 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:31.040 02:59:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:31.040 ************************************
00:04:31.040 START TEST nvme_mount
00:04:31.040 ************************************
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme1n1
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme1n1 1
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme1n1
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all
00:04:31.040 02:59:02 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1
00:04:31.978 Creating new GPT entries in memory.
00:04:31.978 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:31.978 other utilities.
00:04:31.978 02:59:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:31.978 02:59:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:31.978 02:59:03 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:31.978 02:59:03 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:31.978 02:59:03 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:2099199
00:04:33.356 Creating new GPT entries in memory.
00:04:33.356 The operation has completed successfully.
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3974594
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]]
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
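The partition/format/mount sequence traced above reduces to a few standard commands. A bare, destructive sketch using the device and mount point from this particular run as placeholders:

    #!/usr/bin/env bash
    # Sketch of the nvme_mount preparation traced above: wipe the GPT, create
    # one partition, format it ext4 and mount it. Destructive by design.
    set -e
    disk=/dev/nvme1n1
    mnt=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all              # destroy any existing GPT/MBR data
    sgdisk "$disk" --new=1:2048:2099199   # sectors 2048..2099199 = 1 GiB at 512 B/sector
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"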
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme1n1:nvme1n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:33.356 02:59:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:04:35.890 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:35.891 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]]
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.457 [... the read/compare pair repeated for the remaining PCI addresses (0000:00:04.7 through 0000:80:04.0), none matching 0000:5e:00.0 ...]
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]]
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1
00:04:36.457 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]]
00:04:36.457 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1
00:04:36.946 /dev/nvme1n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:36.946 /dev/nvme1n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:36.946 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:36.946 /dev/nvme1n1: calling ioctl to re-read partition table: Success
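cleanup_nvme, as traced above, is the inverse of the mount setup: unmount if needed, then scrub the signatures so the next case sees a blank disk. Roughly:

    #!/usr/bin/env bash
    # Sketch of cleanup_nvme as traced above: unmount the test mount point if
    # it is mounted, then erase filesystem and partition-table signatures
    # from both the partition and the whole disk.
    mnt=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
    if mountpoint -q "$mnt"; then
        umount "$mnt"
    fi
    [[ -b /dev/nvme1n1p1 ]] && wipefs --all /dev/nvme1n1p1
    [[ -b /dev/nvme1n1 ]] && wipefs --all /dev/nvme1n1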
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]]
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme1n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme1n1:nvme1n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:36.946 02:59:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:04:40.236 02:59:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:40.236 02:59:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]]
00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
]] 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.236 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- 
setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme1n1 '' '' 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.237 02:59:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:43.525 02:59:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:43.525 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.525 00:04:43.525 real 0m12.506s 00:04:43.525 user 0m3.883s 00:04:43.525 sys 0m6.376s 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.525 02:59:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:43.525 ************************************ 00:04:43.525 END TEST nvme_mount 00:04:43.525 ************************************ 00:04:43.525 02:59:14 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
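The nvme_mount test that just closed (real 0m12.506s) is a format/mount/verify/teardown cycle on the raw namespace. A minimal sketch of that cycle, assuming the same scratch device /dev/nvme1n1 and a throwaway directory in place of the harness's nvme_mount path:

    dev=/dev/nvme1n1              # scratch namespace used throughout this run
    mnt=$(mktemp -d)              # stand-in for .../spdk/test/setup/nvme_mount
    mkfs.ext4 -qF "$dev"          # format
    mount "$dev" "$mnt"
    touch "$mnt/test_nvme"        # the marker file verify (devices.sh@73) looks for
    mountpoint -q "$mnt"          # devices.sh@71: the mount point must be live
    rm "$mnt/test_nvme"           # devices.sh@74
    umount "$mnt"                 # devices.sh@123
    wipefs --all "$dev"           # devices.sh@28: drop the leftover ext4 signature

The dm_mount test launched on the line above repeats the same cycle, but through a device-mapper target built from two partitions.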
00:04:43.525 02:59:14 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.525 02:59:14 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.525 02:59:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:43.525 ************************************ 00:04:43.525 START TEST dm_mount 00:04:43.525 ************************************ 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:43.525 02:59:14 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:44.502 Creating new GPT entries in memory. 00:04:44.502 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:44.502 other utilities. 00:04:44.502 02:59:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:44.502 02:59:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.502 02:59:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.502 02:59:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.502 02:59:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:2099199 00:04:45.880 Creating new GPT entries in memory. 00:04:45.880 The operation has completed successfully. 
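Both sgdisk --new calls use sector ranges computed in setup/common.sh: size starts at 1073741824 bytes and (( size /= 512 )) converts it to 2097152 sectors, with the first partition conventionally starting at sector 2048. The arithmetic behind the numbers in this log:

    size=$((1073741824 / 512))         # 2097152 sectors per 1 GiB partition
    p1_start=2048                      # part_start == 0 ? 2048 : part_end + 1
    p1_end=$((p1_start + size - 1))    # 2099199  -> --new=1:2048:2099199 (above)
    p2_start=$((p1_end + 1))           # 2099200
    p2_end=$((p2_start + size - 1))    # 4196351  -> --new=2:2099200:4196351 (below)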
00:04:45.880 02:59:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.880 02:59:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.880 02:59:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:45.880 02:59:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:45.880 02:59:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:2099200:4196351 00:04:46.815 The operation has completed successfully. 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3979341 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme1n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.815 
02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.815 02:59:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:49.348 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.348 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:49.607 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 
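The holder@nvme1n1p1:dm-0 strings in this verify call encode the sysfs relationship that devices.sh@168 and @169 tested earlier: while nvme_dm_test exists, each backing partition lists dm-0 as its holder. A sketch of inspecting that relationship by hand, using the device names from this run:

    readlink -f /dev/mapper/nvme_dm_test      # resolves to /dev/dm-0 (devices.sh@165)
    ls /sys/class/block/nvme1n1p1/holders     # prints dm-0 while the DM target is live
    ls /sys/class/block/nvme1n1p2/holders     # same for the second partition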
00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.867 02:59:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.155 02:59:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:53.155 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:53.155 00:04:53.155 real 0m9.543s 00:04:53.155 user 0m2.328s 00:04:53.155 sys 0m4.033s 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.155 02:59:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:53.155 ************************************ 00:04:53.155 END TEST dm_mount 00:04:53.155 ************************************ 00:04:53.155 02:59:24 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:53.155 
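The cleanup that follows runs cleanup_nvme and then cleanup_dm, each step guarded so it is a no-op when the earlier per-test teardown already removed the artifact. Condensed into plain shell, with $nvme_mount and $dm_mount standing in for the two harness mount points:

    mountpoint -q "$nvme_mount" && umount "$nvme_mount"                       # devices.sh@20
    [[ -b /dev/nvme1n1p1 ]] && wipefs --all /dev/nvme1n1p1                    # devices.sh@24-25
    [[ -b /dev/nvme1n1 ]] && wipefs --all /dev/nvme1n1                        # devices.sh@27-28
    mountpoint -q "$dm_mount" && umount "$dm_mount"                           # devices.sh@33
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test  # devices.sh@36-37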
02:59:24 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
02:59:24 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
02:59:24 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]]
02:59:24 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1
02:59:24 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]]
02:59:24 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1
00:04:53.415 /dev/nvme1n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:53.415 /dev/nvme1n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:53.415 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:53.415 /dev/nvme1n1: calling ioctl to re-read partition table: Success
00:04:53.415 02:59:24 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
02:59:24 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount
02:59:24 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
02:59:24 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]]
02:59:24 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]]
02:59:24 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]]
02:59:24 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1
00:04:53.415
00:04:53.415 real 0m26.336s
00:04:53.415 user 0m7.754s
00:04:53.415 sys 0m13.018s
00:04:53.415 02:59:24 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:53.415 02:59:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:53.415 ************************************
00:04:53.415 END TEST devices
00:04:53.415 ************************************
00:04:53.415
00:04:53.415 real 1m29.321s
00:04:53.415 user 0m29.705s
00:04:53.415 sys 0m49.291s
00:04:53.415 02:59:24 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:53.415 02:59:24 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:53.415 ************************************
00:04:53.415 END TEST setup.sh
00:04:53.415 ************************************
00:04:53.415 02:59:24 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:04:56.701 Hugepages
00:04:56.701 node hugesize free / total
00:04:56.701 node0 1048576kB 0 / 0
00:04:56.701 node0 2048kB 1024 / 1024
00:04:56.701 node1 1048576kB 0 / 0
00:04:56.701 node1 2048kB 1024 / 1024
00:04:56.701
00:04:56.701 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:56.701 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:04:56.701 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:04:56.701 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:04:56.701 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:04:56.701 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:04:56.701 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:04:56.701 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:04:56.701 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:04:56.959 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme1 nvme1n1
00:04:56.960 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme0 nvme0n1 nvme0n2
00:04:56.960 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:04:56.960 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:04:56.960 I/OAT
0000:80:04.2 8086 2021 1 ioatdma - - 00:04:56.960 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:56.960 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:56.960 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:56.960 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:56.960 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:56.960 02:59:28 -- spdk/autotest.sh@130 -- # uname -s 00:04:56.960 02:59:28 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:56.960 02:59:28 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:56.960 02:59:28 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:00.248 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:05:00.248 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:00.248 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:01.184 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:01.185 02:59:32 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:02.562 02:59:33 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:02.562 02:59:33 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:02.562 02:59:33 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:02.562 02:59:33 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:02.562 02:59:33 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:02.562 02:59:33 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:02.562 02:59:33 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.562 02:59:33 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:02.562 02:59:33 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:02.562 02:59:33 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:02.562 02:59:33 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 00:05:02.562 02:59:33 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.098 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:05:05.737 Waiting for block devices as requested 00:05:05.737 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:05.737 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:05.737 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:05.737 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:05.996 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:05.996 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:05.996 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:06.255 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:06.255 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
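Each rebind line here (and the 0000:80:04.x functions that continue on the next lines) is setup.sh moving one PCI function from vfio-pci back to ioatdma so the kernel driver owns it while the run waits for block devices. A hand-rolled approximation of that move through standard sysfs attributes, using one BDF from this list; the script's exact mechanics may differ:

    bdf=0000:00:04.0
    echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind     # detach vfio-pci
    echo ioatdma > /sys/bus/pci/devices/$bdf/driver_override  # pin the next driver
    echo "$bdf" > /sys/bus/pci/drivers_probe                  # ask the kernel to rebind
    echo > /sys/bus/pci/devices/$bdf/driver_override          # clear the override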
00:05:06.255 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:06.515 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:06.515 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:06.515 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:06.515 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:06.773 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:06.773 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:06.773 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:07.031 02:59:37 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:07.031 02:59:37 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:07.031 02:59:37 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:07.031 02:59:37 -- common/autotest_common.sh@1498 -- # grep 0000:5e:00.0/nvme/nvme 00:05:07.032 02:59:37 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 00:05:07.032 02:59:37 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 ]] 00:05:07.032 02:59:37 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 00:05:07.032 02:59:37 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:05:07.032 02:59:37 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:05:07.032 02:59:37 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:05:07.032 02:59:37 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:05:07.032 02:59:37 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:07.032 02:59:37 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:07.032 02:59:37 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:07.032 02:59:37 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:07.032 02:59:37 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:07.032 02:59:37 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:05:07.032 02:59:37 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:07.032 02:59:37 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:07.032 02:59:37 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:07.032 02:59:37 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:07.032 02:59:37 -- common/autotest_common.sh@1553 -- # continue 00:05:07.032 02:59:37 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:07.032 02:59:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.032 02:59:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.032 02:59:38 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:07.032 02:59:38 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:07.032 02:59:38 -- common/autotest_common.sh@10 -- # set +x 00:05:07.032 02:59:38 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:10.314 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:05:10.314 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:80:04.7 (8086 2021): 
ioatdma -> vfio-pci 00:05:10.314 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:10.314 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:11.249 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.249 02:59:42 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:11.249 02:59:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.249 02:59:42 -- common/autotest_common.sh@10 -- # set +x 00:05:11.249 02:59:42 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:11.249 02:59:42 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:11.249 02:59:42 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:11.249 02:59:42 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:11.249 02:59:42 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:11.249 02:59:42 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:11.249 02:59:42 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:11.249 02:59:42 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:11.249 02:59:42 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.507 02:59:42 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:11.507 02:59:42 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:11.507 02:59:42 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:11.507 02:59:42 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 00:05:11.507 02:59:42 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:11.507 02:59:42 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:11.507 02:59:42 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:11.507 02:59:42 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:11.507 02:59:42 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:11.507 02:59:42 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:5e:00.0 00:05:11.507 02:59:42 -- common/autotest_common.sh@1588 -- # [[ -z 0000:5e:00.0 ]] 00:05:11.507 02:59:42 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3989507 00:05:11.507 02:59:42 -- common/autotest_common.sh@1594 -- # waitforlisten 3989507 00:05:11.507 02:59:42 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.507 02:59:42 -- common/autotest_common.sh@827 -- # '[' -z 3989507 ']' 00:05:11.507 02:59:42 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.507 02:59:42 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:11.507 02:59:42 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.507 02:59:42 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:11.507 02:59:42 -- common/autotest_common.sh@10 -- # set +x 00:05:11.507 [2024-05-15 02:59:42.562329] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
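The EAL parameter dump for this launch continues on the next line. Around it, opal_revert_cleanup starts a bare spdk_tgt, waits for its UNIX-domain RPC socket, then drives it with rpc.py. Reduced to the calls visible in this log; using rpc_get_methods as the liveness probe is an assumption, since waitforlisten's actual check lives in autotest_common.sh:

    ./build/bin/spdk_tgt & pid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                            # waitforlisten polls, max_retries=100
    done
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
    scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test    # returns -32603 below
    kill $pid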
00:05:11.507 [2024-05-15 02:59:42.562387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3989507 ] 00:05:11.507 [2024-05-15 02:59:42.652801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.766 [2024-05-15 02:59:42.743096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.700 02:59:43 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.700 02:59:43 -- common/autotest_common.sh@860 -- # return 0 00:05:12.700 02:59:43 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:12.700 02:59:43 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:12.700 02:59:43 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:15.986 nvme0n1 00:05:15.986 02:59:46 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:15.986 [2024-05-15 02:59:46.830539] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:15.986 [2024-05-15 02:59:46.830580] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:15.986 request: 00:05:15.986 { 00:05:15.986 "nvme_ctrlr_name": "nvme0", 00:05:15.986 "password": "test", 00:05:15.986 "method": "bdev_nvme_opal_revert", 00:05:15.986 "req_id": 1 00:05:15.986 } 00:05:15.986 Got JSON-RPC error response 00:05:15.986 response: 00:05:15.986 { 00:05:15.986 "code": -32603, 00:05:15.986 "message": "Internal error" 00:05:15.986 } 00:05:15.986 02:59:46 -- common/autotest_common.sh@1600 -- # true 00:05:15.986 02:59:46 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:15.986 02:59:46 -- common/autotest_common.sh@1604 -- # killprocess 3989507 00:05:15.986 02:59:46 -- common/autotest_common.sh@946 -- # '[' -z 3989507 ']' 00:05:15.986 02:59:46 -- common/autotest_common.sh@950 -- # kill -0 3989507 00:05:15.986 02:59:46 -- common/autotest_common.sh@951 -- # uname 00:05:15.986 02:59:46 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:15.986 02:59:46 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3989507 00:05:15.986 02:59:46 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:15.986 02:59:46 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:15.986 02:59:46 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3989507' 00:05:15.986 killing process with pid 3989507 00:05:15.986 02:59:46 -- common/autotest_common.sh@965 -- # kill 3989507 00:05:15.986 02:59:46 -- common/autotest_common.sh@970 -- # wait 3989507 00:05:17.886 02:59:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:17.886 02:59:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:17.886 02:59:48 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:05:17.886 02:59:48 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:05:17.886 02:59:48 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:05:18.145 Restarting all devices. 
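The qat_setup.sh restart below first fails to parse per-device config sections (the lstat()/No GENERAL section errors), then enables SR-IOV on each c6xx endpoint, which is where the 'set to 16 VFs' lines come from. VF enablement itself is one sysfs write per physical function; a sketch using the BDFs this log reports:

    for bdf in 0000:1a:00.0 0000:1c:00.0 0000:1e:00.0; do
        echo 0  > /sys/bus/pci/devices/$bdf/sriov_numvfs    # clear stale VFs first
        echo 16 > /sys/bus/pci/devices/$bdf/sriov_numvfs    # matches 'set to 16 VFs'
    done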
00:05:21.431 lstat() error: No such file or directory
00:05:21.431 QAT Error: No GENERAL section found
00:05:21.431 Failed to configure qat_dev0
00:05:21.431 lstat() error: No such file or directory
00:05:21.431 QAT Error: No GENERAL section found
00:05:21.431 Failed to configure qat_dev1
00:05:21.431 lstat() error: No such file or directory
00:05:21.431 QAT Error: No GENERAL section found
00:05:21.431 Failed to configure qat_dev2
00:05:21.431 enable sriov
00:05:21.431 Checking status of all devices.
00:05:21.431 There is 3 QAT acceleration device(s) in the system:
00:05:21.431 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down
00:05:21.431 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down
00:05:21.431 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down
00:05:22.366 0000:1a:00.0 set to 16 VFs
00:05:23.300 0000:1c:00.0 set to 16 VFs
00:05:23.867 0000:1e:00.0 set to 16 VFs
00:05:25.242 Properly configured the qat device with driver uio_pci_generic.
00:05:25.242 02:59:56 -- spdk/autotest.sh@162 -- # timing_enter lib
00:05:25.242 02:59:56 -- common/autotest_common.sh@720 -- # xtrace_disable
00:05:25.242 02:59:56 -- common/autotest_common.sh@10 -- # set +x
00:05:25.242 02:59:56 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh
00:05:25.242 02:59:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:25.242 02:59:56 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:25.242 02:59:56 -- common/autotest_common.sh@10 -- # set +x
00:05:25.242 ************************************
00:05:25.242 START TEST env
00:05:25.242 ************************************
00:05:25.242 02:59:56 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh
00:05:25.561 * Looking for test storage...
00:05:25.561 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:05:25.561 02:59:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:25.561 02:59:56 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.561 02:59:56 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.561 02:59:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.561 ************************************ 00:05:25.561 START TEST env_memory 00:05:25.561 ************************************ 00:05:25.561 02:59:56 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:25.561 00:05:25.561 00:05:25.561 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.561 http://cunit.sourceforge.net/ 00:05:25.561 00:05:25.561 00:05:25.561 Suite: memory 00:05:25.561 Test: alloc and free memory map ...[2024-05-15 02:59:56.525538] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:25.561 passed 00:05:25.561 Test: mem map translation ...[2024-05-15 02:59:56.556038] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:25.561 [2024-05-15 02:59:56.556059] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:25.561 [2024-05-15 02:59:56.556114] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:25.561 [2024-05-15 02:59:56.556125] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:25.561 passed 00:05:25.561 Test: mem map registration ...[2024-05-15 02:59:56.618809] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:25.561 [2024-05-15 02:59:56.618830] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:25.561 passed 00:05:25.821 Test: mem map adjacent registrations ...passed 00:05:25.821 00:05:25.821 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.821 suites 1 1 n/a 0 0 00:05:25.821 tests 4 4 4 0 0 00:05:25.821 asserts 152 152 152 0 n/a 00:05:25.821 00:05:25.821 Elapsed time = 0.213 seconds 00:05:25.821 00:05:25.821 real 0m0.225s 00:05:25.821 user 0m0.216s 00:05:25.821 sys 0m0.008s 00:05:25.821 02:59:56 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.821 02:59:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:25.821 ************************************ 00:05:25.821 END TEST env_memory 00:05:25.821 ************************************ 00:05:25.821 02:59:56 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:25.821 02:59:56 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.821 02:59:56 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.821 02:59:56 env 
-- common/autotest_common.sh@10 -- # set +x 00:05:25.821 ************************************ 00:05:25.821 START TEST env_vtophys 00:05:25.821 ************************************ 00:05:25.821 02:59:56 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:25.821 EAL: lib.eal log level changed from notice to debug 00:05:25.821 EAL: Detected lcore 0 as core 0 on socket 0 00:05:25.821 EAL: Detected lcore 1 as core 1 on socket 0 00:05:25.821 EAL: Detected lcore 2 as core 2 on socket 0 00:05:25.821 EAL: Detected lcore 3 as core 3 on socket 0 00:05:25.821 EAL: Detected lcore 4 as core 4 on socket 0 00:05:25.821 EAL: Detected lcore 5 as core 5 on socket 0 00:05:25.821 EAL: Detected lcore 6 as core 6 on socket 0 00:05:25.821 EAL: Detected lcore 7 as core 8 on socket 0 00:05:25.821 EAL: Detected lcore 8 as core 9 on socket 0 00:05:25.821 EAL: Detected lcore 9 as core 10 on socket 0 00:05:25.821 EAL: Detected lcore 10 as core 11 on socket 0 00:05:25.821 EAL: Detected lcore 11 as core 12 on socket 0 00:05:25.821 EAL: Detected lcore 12 as core 13 on socket 0 00:05:25.821 EAL: Detected lcore 13 as core 16 on socket 0 00:05:25.821 EAL: Detected lcore 14 as core 17 on socket 0 00:05:25.821 EAL: Detected lcore 15 as core 18 on socket 0 00:05:25.821 EAL: Detected lcore 16 as core 19 on socket 0 00:05:25.821 EAL: Detected lcore 17 as core 20 on socket 0 00:05:25.821 EAL: Detected lcore 18 as core 21 on socket 0 00:05:25.821 EAL: Detected lcore 19 as core 25 on socket 0 00:05:25.821 EAL: Detected lcore 20 as core 26 on socket 0 00:05:25.821 EAL: Detected lcore 21 as core 27 on socket 0 00:05:25.821 EAL: Detected lcore 22 as core 28 on socket 0 00:05:25.821 EAL: Detected lcore 23 as core 29 on socket 0 00:05:25.821 EAL: Detected lcore 24 as core 0 on socket 1 00:05:25.821 EAL: Detected lcore 25 as core 1 on socket 1 00:05:25.821 EAL: Detected lcore 26 as core 2 on socket 1 00:05:25.821 EAL: Detected lcore 27 as core 3 on socket 1 00:05:25.821 EAL: Detected lcore 28 as core 4 on socket 1 00:05:25.821 EAL: Detected lcore 29 as core 5 on socket 1 00:05:25.821 EAL: Detected lcore 30 as core 6 on socket 1 00:05:25.821 EAL: Detected lcore 31 as core 8 on socket 1 00:05:25.821 EAL: Detected lcore 32 as core 9 on socket 1 00:05:25.821 EAL: Detected lcore 33 as core 10 on socket 1 00:05:25.821 EAL: Detected lcore 34 as core 11 on socket 1 00:05:25.821 EAL: Detected lcore 35 as core 12 on socket 1 00:05:25.821 EAL: Detected lcore 36 as core 13 on socket 1 00:05:25.821 EAL: Detected lcore 37 as core 16 on socket 1 00:05:25.821 EAL: Detected lcore 38 as core 17 on socket 1 00:05:25.821 EAL: Detected lcore 39 as core 18 on socket 1 00:05:25.821 EAL: Detected lcore 40 as core 19 on socket 1 00:05:25.821 EAL: Detected lcore 41 as core 20 on socket 1 00:05:25.821 EAL: Detected lcore 42 as core 21 on socket 1 00:05:25.821 EAL: Detected lcore 43 as core 25 on socket 1 00:05:25.821 EAL: Detected lcore 44 as core 26 on socket 1 00:05:25.821 EAL: Detected lcore 45 as core 27 on socket 1 00:05:25.821 EAL: Detected lcore 46 as core 28 on socket 1 00:05:25.821 EAL: Detected lcore 47 as core 29 on socket 1 00:05:25.821 EAL: Detected lcore 48 as core 0 on socket 0 00:05:25.821 EAL: Detected lcore 49 as core 1 on socket 0 00:05:25.821 EAL: Detected lcore 50 as core 2 on socket 0 00:05:25.821 EAL: Detected lcore 51 as core 3 on socket 0 00:05:25.821 EAL: Detected lcore 52 as core 4 on socket 0 00:05:25.821 EAL: Detected lcore 53 as core 5 on socket 0 
00:05:25.821 EAL: Detected lcore 54 as core 6 on socket 0 00:05:25.821 EAL: Detected lcore 55 as core 8 on socket 0 00:05:25.821 EAL: Detected lcore 56 as core 9 on socket 0 00:05:25.821 EAL: Detected lcore 57 as core 10 on socket 0 00:05:25.821 EAL: Detected lcore 58 as core 11 on socket 0 00:05:25.821 EAL: Detected lcore 59 as core 12 on socket 0 00:05:25.821 EAL: Detected lcore 60 as core 13 on socket 0 00:05:25.821 EAL: Detected lcore 61 as core 16 on socket 0 00:05:25.821 EAL: Detected lcore 62 as core 17 on socket 0 00:05:25.821 EAL: Detected lcore 63 as core 18 on socket 0 00:05:25.821 EAL: Detected lcore 64 as core 19 on socket 0 00:05:25.821 EAL: Detected lcore 65 as core 20 on socket 0 00:05:25.821 EAL: Detected lcore 66 as core 21 on socket 0 00:05:25.821 EAL: Detected lcore 67 as core 25 on socket 0 00:05:25.821 EAL: Detected lcore 68 as core 26 on socket 0 00:05:25.821 EAL: Detected lcore 69 as core 27 on socket 0 00:05:25.821 EAL: Detected lcore 70 as core 28 on socket 0 00:05:25.821 EAL: Detected lcore 71 as core 29 on socket 0 00:05:25.821 EAL: Detected lcore 72 as core 0 on socket 1 00:05:25.821 EAL: Detected lcore 73 as core 1 on socket 1 00:05:25.821 EAL: Detected lcore 74 as core 2 on socket 1 00:05:25.821 EAL: Detected lcore 75 as core 3 on socket 1 00:05:25.821 EAL: Detected lcore 76 as core 4 on socket 1 00:05:25.821 EAL: Detected lcore 77 as core 5 on socket 1 00:05:25.821 EAL: Detected lcore 78 as core 6 on socket 1 00:05:25.821 EAL: Detected lcore 79 as core 8 on socket 1 00:05:25.821 EAL: Detected lcore 80 as core 9 on socket 1 00:05:25.821 EAL: Detected lcore 81 as core 10 on socket 1 00:05:25.821 EAL: Detected lcore 82 as core 11 on socket 1 00:05:25.821 EAL: Detected lcore 83 as core 12 on socket 1 00:05:25.821 EAL: Detected lcore 84 as core 13 on socket 1 00:05:25.821 EAL: Detected lcore 85 as core 16 on socket 1 00:05:25.821 EAL: Detected lcore 86 as core 17 on socket 1 00:05:25.821 EAL: Detected lcore 87 as core 18 on socket 1 00:05:25.821 EAL: Detected lcore 88 as core 19 on socket 1 00:05:25.821 EAL: Detected lcore 89 as core 20 on socket 1 00:05:25.821 EAL: Detected lcore 90 as core 21 on socket 1 00:05:25.821 EAL: Detected lcore 91 as core 25 on socket 1 00:05:25.821 EAL: Detected lcore 92 as core 26 on socket 1 00:05:25.821 EAL: Detected lcore 93 as core 27 on socket 1 00:05:25.821 EAL: Detected lcore 94 as core 28 on socket 1 00:05:25.821 EAL: Detected lcore 95 as core 29 on socket 1 00:05:25.821 EAL: Maximum logical cores by configuration: 128 00:05:25.821 EAL: Detected CPU lcores: 96 00:05:25.821 EAL: Detected NUMA nodes: 2 00:05:25.821 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:25.821 EAL: Detected shared linkage of DPDK 00:05:25.821 EAL: No shared files mode enabled, IPC will be disabled 00:05:25.821 EAL: No shared files mode enabled, IPC is disabled 00:05:25.821 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:05:25.821 
EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:02.4 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1c:02.7 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:05:25.821 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:05:25.821 EAL: Bus pci wants IOVA as 'PA' 00:05:25.821 EAL: Bus auxiliary wants IOVA as 'DC' 00:05:25.821 EAL: Bus vdev wants IOVA as 'DC' 00:05:25.821 EAL: Selected IOVA mode 'PA' 00:05:25.821 EAL: Probing VFIO support... 
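[editor's note] The probe lines above show every qat VF asking for IOVA as 'PA', which is why EAL settles on "Selected IOVA mode 'PA'" for the whole process. A minimal sketch (not part of the test suite) of how an application can confirm the negotiated mode after init; only rte_eal_iova_mode() matters here, the rest is illustrative scaffolding.

#include <stdio.h>
#include <rte_eal.h>

int main(int argc, char **argv)
{
	/* rte_eal_init() runs the same bus scan / IOVA negotiation logged above. */
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* RTE_IOVA_PA is expected on this node because the qat PMD asked for 'PA'. */
	printf("IOVA mode: %s\n",
	       rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA");

	rte_eal_cleanup();
	return 0;
}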
00:05:25.821 EAL: IOMMU type 1 (Type 1) is supported 00:05:25.821 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:25.821 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:25.821 EAL: VFIO support initialized 00:05:25.821 EAL: Ask a virtual area of 0x2e000 bytes 00:05:25.821 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:25.821 EAL: Setting up physically contiguous memory... 00:05:25.821 EAL: Setting maximum number of open files to 524288 00:05:25.821 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:25.821 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:25.821 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:25.821 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.821 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:25.821 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.821 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.821 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:25.822 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:25.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.822 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:25.822 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.822 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:25.822 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:25.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.822 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:25.822 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.822 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:25.822 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:25.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.822 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:25.822 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.822 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:25.822 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:25.822 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:25.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.822 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:25.822 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.822 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:25.822 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:25.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.822 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:25.822 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.822 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:25.822 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:25.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.822 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:25.822 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.822 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:25.822 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:25.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.822 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:25.822 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.822 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:25.822 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:25.822 EAL: Hugepages will be freed exactly as allocated. 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: TSC frequency is ~2100000 KHz 00:05:25.822 EAL: Main lcore 0 is ready (tid=7f9ccc220b00;cpuset=[0]) 00:05:25.822 EAL: Trying to obtain current memory policy. 00:05:25.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.822 EAL: Restoring previous memory policy: 0 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was expanded by 2MB 00:05:25.822 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001000000 00:05:25.822 EAL: PCI memory mapped at 0x202001001000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001002000 00:05:25.822 EAL: PCI memory mapped at 0x202001003000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001004000 00:05:25.822 EAL: PCI memory mapped at 0x202001005000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001006000 00:05:25.822 EAL: PCI memory mapped at 0x202001007000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001008000 00:05:25.822 EAL: PCI memory mapped at 0x202001009000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200100a000 00:05:25.822 EAL: PCI memory mapped at 0x20200100b000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200100c000 00:05:25.822 EAL: PCI memory mapped at 0x20200100d000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200100e000 00:05:25.822 EAL: PCI memory mapped at 0x20200100f000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 
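[editor's note] The "request: mp_malloc_sync ... Heap on socket 0 was expanded by 2MB" sequence above is DPDK's dynamic memory subsystem growing a socket-local malloc heap from hugepages on demand. A sketch, assuming a stock DPDK build environment, of the allocation pattern that produces exactly this grow/shrink logging; the type tag "example" is arbitrary.

#include <rte_eal.h>
#include <rte_malloc.h>

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* Allocating from socket 0 triggers "Heap on socket 0 was expanded
	 * by NMB" whenever no free hugepage-backed element is large enough. */
	void *buf = rte_malloc_socket("example", 2 * 1024 * 1024, 64, 0);
	if (buf != NULL)
		rte_free(buf);   /* freeing lets EAL shrink the heap again */

	rte_eal_cleanup();
	return 0;
}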
00:05:25.822 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001010000 00:05:25.822 EAL: PCI memory mapped at 0x202001011000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001012000 00:05:25.822 EAL: PCI memory mapped at 0x202001013000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001014000 00:05:25.822 EAL: PCI memory mapped at 0x202001015000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001016000 00:05:25.822 EAL: PCI memory mapped at 0x202001017000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001018000 00:05:25.822 EAL: PCI memory mapped at 0x202001019000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200101a000 00:05:25.822 EAL: PCI memory mapped at 0x20200101b000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200101c000 00:05:25.822 EAL: PCI memory mapped at 0x20200101d000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:25.822 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200101e000 00:05:25.822 EAL: PCI memory mapped at 0x20200101f000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001020000 00:05:25.822 EAL: PCI memory mapped at 0x202001021000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:01.1 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001022000 00:05:25.822 EAL: PCI memory mapped at 0x202001023000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:01.2 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001024000 00:05:25.822 EAL: PCI memory mapped at 0x202001025000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:01.3 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001026000 00:05:25.822 EAL: PCI memory mapped at 0x202001027000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 
(socket 0) 00:05:25.822 EAL: PCI device 0000:1c:01.4 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001028000 00:05:25.822 EAL: PCI memory mapped at 0x202001029000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:01.5 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200102a000 00:05:25.822 EAL: PCI memory mapped at 0x20200102b000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:01.6 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200102c000 00:05:25.822 EAL: PCI memory mapped at 0x20200102d000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:01.7 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200102e000 00:05:25.822 EAL: PCI memory mapped at 0x20200102f000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:02.0 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001030000 00:05:25.822 EAL: PCI memory mapped at 0x202001031000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:02.1 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001032000 00:05:25.822 EAL: PCI memory mapped at 0x202001033000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:02.2 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001034000 00:05:25.822 EAL: PCI memory mapped at 0x202001035000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:02.3 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001036000 00:05:25.822 EAL: PCI memory mapped at 0x202001037000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:02.4 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001038000 00:05:25.822 EAL: PCI memory mapped at 0x202001039000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:02.5 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200103a000 00:05:25.822 EAL: PCI memory mapped at 0x20200103b000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:02.6 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200103c000 00:05:25.822 EAL: PCI memory mapped at 0x20200103d000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:25.822 EAL: PCI device 0000:1c:02.7 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200103e000 00:05:25.822 EAL: PCI memory mapped at 0x20200103f000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:1c:02.7 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:01.0 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001040000 00:05:25.822 EAL: PCI memory mapped at 0x202001041000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:01.1 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001042000 00:05:25.822 EAL: PCI memory mapped at 0x202001043000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:01.2 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001044000 00:05:25.822 EAL: PCI memory mapped at 0x202001045000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:01.3 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001046000 00:05:25.822 EAL: PCI memory mapped at 0x202001047000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:01.4 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001048000 00:05:25.822 EAL: PCI memory mapped at 0x202001049000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:01.5 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200104a000 00:05:25.822 EAL: PCI memory mapped at 0x20200104b000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:01.6 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200104c000 00:05:25.822 EAL: PCI memory mapped at 0x20200104d000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:01.7 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200104e000 00:05:25.822 EAL: PCI memory mapped at 0x20200104f000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:02.0 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001050000 00:05:25.822 EAL: PCI memory mapped at 0x202001051000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:02.1 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001052000 00:05:25.822 EAL: PCI memory mapped at 0x202001053000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:02.2 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001054000 00:05:25.822 EAL: PCI memory mapped at 0x202001055000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:02.3 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001056000 00:05:25.822 EAL: PCI memory mapped at 0x202001057000 00:05:25.822 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:02.4 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x202001058000 00:05:25.822 EAL: PCI memory mapped at 0x202001059000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:02.5 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200105a000 00:05:25.822 EAL: PCI memory mapped at 0x20200105b000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:02.6 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200105c000 00:05:25.822 EAL: PCI memory mapped at 0x20200105d000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:25.822 EAL: PCI device 0000:1e:02.7 on NUMA socket 0 00:05:25.822 EAL: probe driver: 8086:37c9 qat 00:05:25.822 EAL: PCI memory mapped at 0x20200105e000 00:05:25.822 EAL: PCI memory mapped at 0x20200105f000 00:05:25.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:25.822 EAL: Mem event callback 'spdk:(nil)' registered 00:05:25.822 00:05:25.822 00:05:25.822 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.822 http://cunit.sourceforge.net/ 00:05:25.822 00:05:25.822 00:05:25.822 Suite: components_suite 00:05:25.822 Test: vtophys_malloc_test ...passed 00:05:25.822 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:25.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.822 EAL: Restoring previous memory policy: 4 00:05:25.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was expanded by 4MB 00:05:25.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was shrunk by 4MB 00:05:25.822 EAL: Trying to obtain current memory policy. 00:05:25.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.822 EAL: Restoring previous memory policy: 4 00:05:25.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was expanded by 6MB 00:05:25.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was shrunk by 6MB 00:05:25.822 EAL: Trying to obtain current memory policy. 
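[editor's note] The "Mem event callback 'spdk:(nil)' registered" line is SPDK hooking DPDK's memory-event notification so it can track the hugepage allocations and frees behind the "Calling mem event callback" / expand / shrink messages in this suite. A sketch of registering such a callback through the public DPDK API; the callback name and body are illustrative, not SPDK's actual handler.

#include <stdio.h>
#include <rte_eal.h>
#include <rte_memory.h>

/* Called by EAL whenever hugepage-backed memory is allocated or freed. */
static void
mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
{
	(void)arg;
	printf("mem event: %s addr=%p len=%zu\n",
	       event == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
}

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* Same mechanism as "Mem event callback 'spdk:(nil)' registered" above. */
	if (rte_mem_event_callback_register("example", mem_event_cb, NULL) != 0)
		return 1;

	rte_eal_cleanup();
	return 0;
}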
00:05:25.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.822 EAL: Restoring previous memory policy: 4 00:05:25.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was expanded by 10MB 00:05:25.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was shrunk by 10MB 00:05:25.822 EAL: Trying to obtain current memory policy. 00:05:25.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.822 EAL: Restoring previous memory policy: 4 00:05:25.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was expanded by 18MB 00:05:25.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was shrunk by 18MB 00:05:25.822 EAL: Trying to obtain current memory policy. 00:05:25.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.822 EAL: Restoring previous memory policy: 4 00:05:25.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was expanded by 34MB 00:05:25.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was shrunk by 34MB 00:05:25.822 EAL: Trying to obtain current memory policy. 00:05:25.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.822 EAL: Restoring previous memory policy: 4 00:05:25.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was expanded by 66MB 00:05:25.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.822 EAL: request: mp_malloc_sync 00:05:25.822 EAL: No shared files mode enabled, IPC is disabled 00:05:25.822 EAL: Heap on socket 0 was shrunk by 66MB 00:05:25.822 EAL: Trying to obtain current memory policy. 00:05:25.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.822 EAL: Restoring previous memory policy: 4 00:05:25.823 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.823 EAL: request: mp_malloc_sync 00:05:25.823 EAL: No shared files mode enabled, IPC is disabled 00:05:25.823 EAL: Heap on socket 0 was expanded by 130MB 00:05:26.081 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.081 EAL: request: mp_malloc_sync 00:05:26.081 EAL: No shared files mode enabled, IPC is disabled 00:05:26.081 EAL: Heap on socket 0 was shrunk by 130MB 00:05:26.081 EAL: Trying to obtain current memory policy. 
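[editor's note] Each rung of the expand/shrink ladder above allocates a buffer and checks that its virtual-to-physical translation holds, which is the whole point of env_vtophys. A sketch, assuming SPDK headers and libraries are available, of the translation the suite validates: allocate pinned DMA memory, then resolve its physical address with spdk_vtophys(). The app name is illustrative.

#include <stdio.h>
#include <inttypes.h>
#include "spdk/env.h"

int main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_example";   /* illustrative app name */
	if (spdk_env_init(&opts) < 0)
		return 1;

	/* Hugepage-backed, DMA-safe allocation: the kind the test translates. */
	void *buf = spdk_dma_malloc(4096, 4096, NULL);
	if (buf == NULL)
		return 1;

	uint64_t paddr = spdk_vtophys(buf, NULL);
	if (paddr == SPDK_VTOPHYS_ERROR)
		fprintf(stderr, "translation failed\n");
	else
		printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);

	spdk_dma_free(buf);
	spdk_env_fini();
	return 0;
}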
00:05:26.081 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.081 EAL: Restoring previous memory policy: 4 00:05:26.081 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.081 EAL: request: mp_malloc_sync 00:05:26.081 EAL: No shared files mode enabled, IPC is disabled 00:05:26.081 EAL: Heap on socket 0 was expanded by 258MB 00:05:26.081 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.081 EAL: request: mp_malloc_sync 00:05:26.081 EAL: No shared files mode enabled, IPC is disabled 00:05:26.081 EAL: Heap on socket 0 was shrunk by 258MB 00:05:26.081 EAL: Trying to obtain current memory policy. 00:05:26.081 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.338 EAL: Restoring previous memory policy: 4 00:05:26.338 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.338 EAL: request: mp_malloc_sync 00:05:26.338 EAL: No shared files mode enabled, IPC is disabled 00:05:26.338 EAL: Heap on socket 0 was expanded by 514MB 00:05:26.338 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.338 EAL: request: mp_malloc_sync 00:05:26.338 EAL: No shared files mode enabled, IPC is disabled 00:05:26.338 EAL: Heap on socket 0 was shrunk by 514MB 00:05:26.338 EAL: Trying to obtain current memory policy. 00:05:26.338 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.596 EAL: Restoring previous memory policy: 4 00:05:26.596 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.596 EAL: request: mp_malloc_sync 00:05:26.596 EAL: No shared files mode enabled, IPC is disabled 00:05:26.596 EAL: Heap on socket 0 was expanded by 1026MB 00:05:26.855 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.855 EAL: request: mp_malloc_sync 00:05:26.855 EAL: No shared files mode enabled, IPC is disabled 00:05:26.855 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:26.855 passed 00:05:26.855 00:05:26.855 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.855 suites 1 1 n/a 0 0 00:05:26.855 tests 2 2 2 0 0 00:05:26.855 asserts 6730 6730 6730 0 n/a 00:05:26.855 00:05:26.855 Elapsed time = 1.039 seconds 00:05:26.855 EAL: No shared files mode enabled, IPC is disabled 00:05:26.855 EAL: No shared files mode enabled, IPC is disabled 00:05:26.855 EAL: No shared files mode enabled, IPC is disabled 00:05:26.855 00:05:26.855 real 0m1.206s 00:05:26.855 user 0m0.690s 00:05:26.855 sys 0m0.480s 00:05:26.855 02:59:57 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.855 02:59:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:26.855 ************************************ 00:05:26.855 END TEST env_vtophys 00:05:26.855 ************************************ 00:05:26.855 02:59:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:05:26.855 02:59:58 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.855 02:59:58 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.855 02:59:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.114 ************************************ 00:05:27.115 START TEST env_pci 00:05:27.115 ************************************ 00:05:27.115 02:59:58 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:05:27.115 00:05:27.115 00:05:27.115 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.115 http://cunit.sourceforge.net/ 00:05:27.115 00:05:27.115 00:05:27.115 Suite: pci 00:05:27.115 Test: pci_hook ...[2024-05-15 02:59:58.071302] 
/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3992277 has claimed it 00:05:27.115 EAL: Cannot find device (10000:00:01.0) 00:05:27.115 EAL: Failed to attach device on primary process 00:05:27.115 passed 00:05:27.115 00:05:27.115 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.115 suites 1 1 n/a 0 0 00:05:27.115 tests 1 1 1 0 0 00:05:27.115 asserts 25 25 25 0 n/a 00:05:27.115 00:05:27.115 Elapsed time = 0.031 seconds 00:05:27.115 00:05:27.115 real 0m0.055s 00:05:27.115 user 0m0.015s 00:05:27.115 sys 0m0.039s 00:05:27.115 02:59:58 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.115 02:59:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:27.115 ************************************ 00:05:27.115 END TEST env_pci 00:05:27.115 ************************************ 00:05:27.115 02:59:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:27.115 02:59:58 env -- env/env.sh@15 -- # uname 00:05:27.115 02:59:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:27.115 02:59:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:27.115 02:59:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.115 02:59:58 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:27.115 02:59:58 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.115 02:59:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.115 ************************************ 00:05:27.115 START TEST env_dpdk_post_init 00:05:27.115 ************************************ 00:05:27.115 02:59:58 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.115 EAL: Detected CPU lcores: 96 00:05:27.115 EAL: Detected NUMA nodes: 2 00:05:27.115 EAL: Detected shared linkage of DPDK 00:05:27.115 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:27.115 EAL: Selected IOVA mode 'PA' 00:05:27.115 EAL: VFIO support initialized 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:05:27.115 CRYPTODEV: 
Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 
0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue 
pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.115 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.115 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:05:27.115 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 
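[editor's note] Every qat VF probed in this run registers a _qat_sym and a _qat_asym cryptodev, which is why env_dpdk_post_init ends up with such a long "Creating cryptodev" stream. A sketch, assuming the DPDK cryptodev library, of enumerating the devices those lines produced after EAL init.

#include <stdio.h>
#include <rte_eal.h>
#include <rte_cryptodev.h>

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* Each "CRYPTODEV: Creating cryptodev ..." line above adds one id. */
	uint8_t nb = rte_cryptodev_count();
	for (uint8_t id = 0; id < nb; id++) {
		struct rte_cryptodev_info info;

		rte_cryptodev_info_get(id, &info);
		printf("cryptodev %u: driver %s, max qp %u\n",
		       id, info.driver_name, info.max_nb_queue_pairs);
	}

	rte_eal_cleanup();
	return 0;
}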
00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 
0000:1e:02.3_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:27.116 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:05:27.116 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:27.116 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:27.375 EAL: Using IOMMU type 1 (Type 1) 00:05:27.375 EAL: Ignore mapping IO port bar(1) 00:05:27.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:27.375 EAL: Ignore mapping IO port bar(1) 00:05:27.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:27.375 EAL: Ignore mapping IO port bar(1) 00:05:27.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:27.375 EAL: Ignore mapping IO port bar(1) 00:05:27.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:27.375 EAL: Ignore mapping IO port bar(1) 00:05:27.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:27.375 EAL: Ignore mapping IO port bar(1) 00:05:27.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:27.375 EAL: Ignore mapping IO port bar(1) 00:05:27.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:27.375 EAL: Ignore mapping IO port bar(1) 00:05:27.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:28.307 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:28.307 EAL: Ignore mapping IO port bar(1) 00:05:28.307 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:28.307 EAL: Ignore mapping IO port 
[... the "Ignore mapping IO port bar(1)" / spdk_ioat probe pair repeats for 0000:80:04.1 through 0000:80:04.7 (socket 1) ...]
00:05:31.589 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:05:31.589 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001080000
00:05:31.589 Starting DPDK initialization...
00:05:31.589 Starting SPDK post initialization...
00:05:31.589 SPDK NVMe probe
00:05:31.589 Attaching to 0000:5e:00.0
00:05:31.589 Attached to 0000:5e:00.0
00:05:31.589 Cleaning up...
00:05:31.589 
00:05:31.589 real    0m4.383s
00:05:31.589 user    0m3.288s
00:05:31.589 sys     0m0.165s
00:05:31.589 03:00:02 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:31.589 03:00:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:31.589 ************************************
00:05:31.589 END TEST env_dpdk_post_init
00:05:31.589 ************************************
00:05:31.589 03:00:02 env -- env/env.sh@26 -- # uname
00:05:31.589 03:00:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:31.589 03:00:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:31.589 03:00:02 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:31.589 03:00:02 env -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:31.589 03:00:02 env -- common/autotest_common.sh@10 -- # set +x
00:05:31.589 ************************************
00:05:31.589 START TEST env_mem_callbacks
00:05:31.589 ************************************
00:05:31.589 03:00:02 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:31.589 EAL: Detected CPU lcores: 96
00:05:31.589 EAL: Detected NUMA nodes: 2
00:05:31.589 EAL: Detected shared linkage of DPDK
00:05:31.589 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:31.589 EAL: Selected IOVA mode 'PA'
00:05:31.589 EAL: VFIO support initialized
00:05:31.589 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0)
00:05:31.589 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym
00:05:31.589 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0
00:05:31.589 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym
00:05:31.589 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0
[... the same probe / sym + asym cryptodev creation block repeats for all 48 QAT VFs: 0000:1a:01.1 through 0000:1a:02.7, 0000:1c:01.0 through 0000:1c:02.7 and 0000:1e:01.0 through 0000:1e:02.7 ...]
00:05:31.594 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:31.594 
00:05:31.594 
00:05:31.594 CUnit - A unit testing framework for C - Version 2.1-3
00:05:31.594 http://cunit.sourceforge.net/
00:05:31.594 
00:05:31.594 
00:05:31.594 Suite: memory
00:05:31.594 Test: test ...
00:05:31.594 register 0x200000200000 2097152
00:05:31.594 malloc 3145728
00:05:31.594 register 0x200000400000 4194304
00:05:31.594 buf 0x200000500000 len 3145728 PASSED
00:05:31.594 malloc 64
00:05:31.594 buf 0x2000004fff40 len 64 PASSED
00:05:31.594 malloc 4194304
00:05:31.594 register 0x200000800000 6291456
00:05:31.594 buf 0x200000a00000 len 4194304 PASSED
00:05:31.594 free 0x200000500000 3145728
00:05:31.594 free 0x2000004fff40 64
00:05:31.594 unregister 0x200000400000 4194304 PASSED
00:05:31.594 free 0x200000a00000 4194304
00:05:31.594 unregister 0x200000800000 6291456 PASSED
00:05:31.594 malloc 8388608
00:05:31.594 register 0x200000400000 10485760
00:05:31.594 buf 0x200000600000 len 8388608 PASSED
00:05:31.594 free 0x200000600000 8388608
00:05:31.594 unregister 0x200000400000 10485760 PASSED
00:05:31.594 passed
00:05:31.594 
00:05:31.594 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:31.594               suites      1      1    n/a      0        0
00:05:31.594                tests      1      1      1      0        0
00:05:31.594              asserts     15     15     15      0      n/a
00:05:31.594 
00:05:31.594 Elapsed time =    0.006 seconds
00:05:31.594 
00:05:31.594 real    0m0.086s
00:05:31.594 user    0m0.033s
00:05:31.594 sys     0m0.053s
00:05:31.594 03:00:02 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:31.595 03:00:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:31.595 ************************************
00:05:31.595 END TEST env_mem_callbacks
00:05:31.595 ************************************
00:05:31.595 
00:05:31.595 real    0m6.388s
00:05:31.595 user    0m4.406s
00:05:31.595 sys     0m1.033s
00:05:31.595 03:00:02 env -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:31.595 03:00:02 env -- common/autotest_common.sh@10 -- # set +x
00:05:31.595 ************************************
00:05:31.595 END TEST env
00:05:31.595 ************************************
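The register/unregister pairs in the env_mem_callbacks output above track DPDK heap growth, and the sizes are consistent with memory registration happening in whole hugepages: assuming the default 2 MiB hugepage size on this rig, the 3 MiB malloc rounds up to a 4 MiB registration, and the larger allocations likewise land on hugepage multiples (the 4 MiB case apparently grows by 6 MiB, which would leave room for allocator metadata). A quick sanity check of that rounding, as a sketch:

    # Round the 3 MiB allocation up to the assumed 2 MiB hugepage boundary.
    hugepage=2097152    # 2 MiB, assumed default
    malloc_len=3145728  # the "malloc 3145728" line above
    echo $(( (malloc_len + hugepage - 1) / hugepage * hugepage ))  # 4194304, matching "register ... 4194304"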
00:05:31.861 03:00:02 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh
00:05:31.861 03:00:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:31.861 03:00:02 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:31.861 03:00:02 -- common/autotest_common.sh@10 -- # set +x
00:05:31.861 ************************************
00:05:31.861 START TEST rpc
00:05:31.861 ************************************
00:05:31.861 * Looking for test storage...
00:05:31.861 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc
00:05:31.861 03:00:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3993220
00:05:31.861 03:00:02 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:31.861 03:00:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:31.861 03:00:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3993220
00:05:31.861 03:00:02 rpc -- common/autotest_common.sh@827 -- # '[' -z 3993220 ']'
00:05:31.861 03:00:02 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:31.861 03:00:02 rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:31.861 03:00:02 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:31.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:31.861 03:00:02 rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:31.861 03:00:02 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.861 [2024-05-15 03:00:02.954647] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:31.861 [2024-05-15 03:00:02.954692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3993220 ]
00:05:32.118 [2024-05-15 03:00:03.041597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.118 [2024-05-15 03:00:03.136286] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:32.118 [2024-05-15 03:00:03.136331] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3993220' to capture a snapshot of events at runtime.
00:05:32.118 [2024-05-15 03:00:03.136344] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:32.118 [2024-05-15 03:00:03.136354] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:32.118 [2024-05-15 03:00:03.136361] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3993220 for offline analysis/debug.
00:05:32.119 [2024-05-15 03:00:03.136387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
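waitforlisten (from autotest_common.sh) polls the freshly started target until its RPC socket answers; the (( i == 0 )) / return 0 lines that follow are that loop exiting on success. An equivalent manual readiness probe, as a sketch assuming the default /var/tmp/spdk.sock socket:

    # Poll until spdk_tgt answers a trivial RPC (1 s timeout per attempt).
    until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done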
00:05:33.053 03:00:03 rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:33.053 03:00:03 rpc -- common/autotest_common.sh@860 -- # return 0
00:05:33.053 03:00:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc
00:05:33.053 03:00:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc
00:05:33.053 03:00:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:33.053 03:00:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:33.053 03:00:03 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:33.053 03:00:03 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:33.053 03:00:03 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:33.053 ************************************
00:05:33.053 START TEST rpc_integrity
00:05:33.053 ************************************
00:05:33.053 03:00:03 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity
00:05:33.053 03:00:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:33.053 03:00:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:33.053 03:00:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:33.053 03:00:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:33.053 03:00:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:33.053 03:00:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:33.053 03:00:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:33.053 03:00:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:33.053 03:00:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:33.053 03:00:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
  {
    "name": "Malloc0",
    "aliases": [ "b95d8418-2022-434c-bb61-2e3993bddd60" ],
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 16384,
    "uuid": "b95d8418-2022-434c-bb61-2e3993bddd60",
    "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
    "claimed": false,
    "zoned": false,
    "supported_io_types": { "read": true, "write": true, "unmap": true, "write_zeroes": true, "flush": true, "reset": true, "compare": false, "compare_and_write": false, "abort": true, "nvme_admin": false, "nvme_io": false },
    "memory_domains": [
      { "dma_device_id": "system", "dma_device_type": 1 },
      { "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", "dma_device_type": 2 }
    ],
    "driver_specific": {}
  }
]'
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:33.053 [2024-05-15 03:00:04.084004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:33.053 [2024-05-15 03:00:04.084045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:33.053 [2024-05-15 03:00:04.084062] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16087b0
00:05:33.053 [2024-05-15 03:00:04.084072] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:33.053 [2024-05-15 03:00:04.085709] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:33.053 [2024-05-15 03:00:04.085736] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:33.053 Passthru0
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
  [... the Malloc0 descriptor from above, now with "claimed": true and "claim_type": "exclusive_write" ...],
  {
    "name": "Passthru0",
    "aliases": [ "fc0690a4-b6c5-5650-8f6e-bc379feb3d92" ],
    "product_name": "passthru",
    "block_size": 512,
    "num_blocks": 16384,
    "uuid": "fc0690a4-b6c5-5650-8f6e-bc379feb3d92",
    "claimed": false,
    "zoned": false,
    [... assigned_rate_limits, supported_io_types and memory_domains as for Malloc0 ...]
    "driver_specific": { "passthru": { "name": "Passthru0", "base_bdev_name": "Malloc0" } }
  }
]'
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
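The vbdev_passthru NOTICE lines above show the passthru vbdev claiming its base device, which the '[' 2 == 2 ']' length check only verifies indirectly. The claim is also directly visible per bdev; a one-liner sketch against the same target (bdev name selected with the -b flag):

    # While Passthru0 exists, the base bdev reports itself as claimed.
    ./scripts/rpc.py bdev_get_bdevs -b Malloc0 | jq '.[0].claimed'   # true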
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:33.053 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:33.053 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:33.311 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:33.311 03:00:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:33.311 
00:05:33.311 real    0m0.300s
00:05:33.311 user    0m0.193s
00:05:33.311 sys     0m0.039s
00:05:33.311 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:33.311 03:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:33.311 ************************************
00:05:33.311 END TEST rpc_integrity
00:05:33.311 ************************************
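Everything rpc_integrity just exercised can be replayed by hand against a running spdk_tgt; every RPC below appears verbatim in the trace above, and the socket path is assumed to be the default:

    ./scripts/rpc.py bdev_malloc_create 8 512                     # -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length                   # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length                   # expect 0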
"compare": false, 00:05:33.311 "compare_and_write": false, 00:05:33.311 "abort": true, 00:05:33.311 "nvme_admin": false, 00:05:33.311 "nvme_io": false 00:05:33.311 }, 00:05:33.311 "memory_domains": [ 00:05:33.311 { 00:05:33.311 "dma_device_id": "system", 00:05:33.311 "dma_device_type": 1 00:05:33.311 }, 00:05:33.311 { 00:05:33.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.311 "dma_device_type": 2 00:05:33.311 } 00:05:33.311 ], 00:05:33.311 "driver_specific": {} 00:05:33.311 } 00:05:33.311 ]' 00:05:33.311 03:00:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:33.311 03:00:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:33.311 03:00:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:33.311 03:00:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.311 03:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.311 03:00:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.311 03:00:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:33.311 03:00:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.311 03:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.311 03:00:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.311 03:00:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:33.311 03:00:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:33.311 03:00:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:33.311 00:05:33.311 real 0m0.145s 00:05:33.311 user 0m0.097s 00:05:33.311 sys 0m0.017s 00:05:33.311 03:00:04 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.311 03:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.311 ************************************ 00:05:33.311 END TEST rpc_plugins 00:05:33.311 ************************************ 00:05:33.570 03:00:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:33.570 03:00:04 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.570 03:00:04 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.570 03:00:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.570 ************************************ 00:05:33.570 START TEST rpc_trace_cmd_test 00:05:33.570 ************************************ 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:33.570 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3993220", 00:05:33.570 "tpoint_group_mask": "0x8", 00:05:33.570 "iscsi_conn": { 00:05:33.570 "mask": "0x2", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 }, 00:05:33.570 "scsi": { 00:05:33.570 "mask": "0x4", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 }, 00:05:33.570 "bdev": { 00:05:33.570 "mask": "0x8", 00:05:33.570 "tpoint_mask": "0xffffffffffffffff" 00:05:33.570 }, 00:05:33.570 "nvmf_rdma": { 
00:05:33.570 "mask": "0x10", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 }, 00:05:33.570 "nvmf_tcp": { 00:05:33.570 "mask": "0x20", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 }, 00:05:33.570 "ftl": { 00:05:33.570 "mask": "0x40", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 }, 00:05:33.570 "blobfs": { 00:05:33.570 "mask": "0x80", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 }, 00:05:33.570 "dsa": { 00:05:33.570 "mask": "0x200", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 }, 00:05:33.570 "thread": { 00:05:33.570 "mask": "0x400", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 }, 00:05:33.570 "nvme_pcie": { 00:05:33.570 "mask": "0x800", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 }, 00:05:33.570 "iaa": { 00:05:33.570 "mask": "0x1000", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 }, 00:05:33.570 "nvme_tcp": { 00:05:33.570 "mask": "0x2000", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 }, 00:05:33.570 "bdev_nvme": { 00:05:33.570 "mask": "0x4000", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 }, 00:05:33.570 "sock": { 00:05:33.570 "mask": "0x8000", 00:05:33.570 "tpoint_mask": "0x0" 00:05:33.570 } 00:05:33.570 }' 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:33.570 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:33.828 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:33.828 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:33.828 03:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:33.828 00:05:33.828 real 0m0.251s 00:05:33.828 user 0m0.209s 00:05:33.828 sys 0m0.034s 00:05:33.828 03:00:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.828 03:00:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.828 ************************************ 00:05:33.828 END TEST rpc_trace_cmd_test 00:05:33.828 ************************************ 00:05:33.828 03:00:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:33.828 03:00:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:33.828 03:00:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:33.828 03:00:04 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.828 03:00:04 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.828 03:00:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.828 ************************************ 00:05:33.828 START TEST rpc_daemon_integrity 00:05:33.828 ************************************ 00:05:33.828 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:33.828 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.828 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.828 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.828 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.828 03:00:04 
00:05:33.828 03:00:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:33.828 03:00:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:33.828 03:00:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:33.828 03:00:04 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:33.828 03:00:04 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:33.828 03:00:04 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:33.828 ************************************
00:05:33.828 START TEST rpc_daemon_integrity
00:05:33.828 ************************************
00:05:33.828 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity
00:05:33.828 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:33.828 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:33.828 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:33.828 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:33.828 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
  [... same descriptor shape as the Malloc0 dump above, with "name": "Malloc2" and "uuid": "652a7196-8891-4caf-86e9-d7197e816040" ...]
]'
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:33.829 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:34.089 [2024-05-15 03:00:04.990611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:34.089 [2024-05-15 03:00:04.990650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:34.089 [2024-05-15 03:00:04.990666] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17b1af0
00:05:34.089 [2024-05-15 03:00:04.990675] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:34.089 [2024-05-15 03:00:04.992142] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:34.089 [2024-05-15 03:00:04.992168] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:34.089 Passthru0
00:05:34.089 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:34.089 03:00:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:34.089 03:00:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
  [... the Malloc2 descriptor as above, now with "claimed": true and "claim_type": "exclusive_write", followed by a Passthru0 descriptor with "uuid": "4f3158b4-ba44-5dfa-a678-60af38c1f759" and "driver_specific": { "passthru": { "name": "Passthru0", "base_bdev_name": "Malloc2" } } ...]
]'
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:34.089 03:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:34.090 03:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:34.090 03:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:34.090 
00:05:34.090 real    0m0.292s
00:05:34.090 user    0m0.191s
00:05:34.090 sys     0m0.036s
00:05:34.090 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:34.090 03:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:34.090 ************************************
00:05:34.090 END TEST rpc_daemon_integrity
00:05:34.090 ************************************
00:05:34.090 03:00:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:34.090 03:00:05 rpc -- rpc/rpc.sh@84 -- # killprocess 3993220
00:05:34.090 03:00:05 rpc -- common/autotest_common.sh@946 -- # '[' -z 3993220 ']'
00:05:34.090 03:00:05 rpc -- common/autotest_common.sh@950 -- # kill -0 3993220
00:05:34.090 03:00:05 rpc -- common/autotest_common.sh@951 -- # uname
00:05:34.090 03:00:05 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:05:34.090 03:00:05 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3993220
00:05:34.090 03:00:05 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:05:34.090 03:00:05 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:05:34.090 03:00:05 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3993220'
00:05:34.090 killing process with pid 3993220
00:05:34.090 03:00:05 rpc -- common/autotest_common.sh@965 -- # kill 3993220
00:05:34.090 03:00:05 rpc -- common/autotest_common.sh@970 -- # wait 3993220
00:05:34.656 
00:05:34.656 real    0m2.781s
00:05:34.656 user    0m3.668s
00:05:34.656 sys     0m0.730s
00:05:34.656 03:00:05 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:34.656 03:00:05 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:34.656 ************************************
00:05:34.656 END TEST rpc
00:05:34.656 ************************************
00:05:34.656 03:00:05 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:34.656 03:00:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.656 03:00:05 -- common/autotest_common.sh@10 -- # set +x 00:05:34.656 ************************************ 00:05:34.656 START TEST skip_rpc 00:05:34.656 ************************************ 00:05:34.656 03:00:05 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:34.656 * Looking for test storage... 00:05:34.656 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:34.656 03:00:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:34.656 03:00:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:34.656 03:00:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:34.656 03:00:05 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.656 03:00:05 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.656 03:00:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.656 ************************************ 00:05:34.656 START TEST skip_rpc 00:05:34.656 ************************************ 00:05:34.656 03:00:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:34.656 03:00:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3993970 00:05:34.656 03:00:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.656 03:00:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:34.656 03:00:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:34.914 [2024-05-15 03:00:05.849046] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
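The target above was launched with --no-rpc-server, so the spdk_get_version RPC attempted next must fail; that is the whole point of test_skip_rpc. A minimal standalone sketch of the same assertion (paths are placeholders relative to an SPDK build tree, unlike the absolute Jenkins paths in this log; the fixed five-second settle comes from the trace's sleep 5):

  # Start a target with no RPC server, then confirm RPC calls are rejected.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                               # the test waits a fixed 5 s before probing
  if ./scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC succeeded with --no-rpc-server" >&2
      exit 1
  fi
  kill "$tgt_pid"; wait "$tgt_pid" || true
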
00:05:34.914 [2024-05-15 03:00:05.849110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3993970 ] 00:05:34.914 [2024-05-15 03:00:05.945734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.914 [2024-05-15 03:00:06.040782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3993970 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3993970 ']' 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3993970 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3993970 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3993970' 00:05:40.181 killing process with pid 3993970 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3993970 00:05:40.181 03:00:10 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3993970 00:05:40.181 00:05:40.181 real 0m5.432s 00:05:40.181 user 0m5.139s 00:05:40.181 sys 0m0.302s 00:05:40.181 03:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.181 03:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.181 ************************************ 00:05:40.181 END TEST skip_rpc 00:05:40.181 ************************************ 00:05:40.181 
03:00:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:40.181 03:00:11 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.181 03:00:11 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.181 03:00:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.181 ************************************ 00:05:40.181 START TEST skip_rpc_with_json 00:05:40.181 ************************************ 00:05:40.181 03:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:40.181 03:00:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:40.181 03:00:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3995242 00:05:40.181 03:00:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.181 03:00:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.181 03:00:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3995242 00:05:40.181 03:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3995242 ']' 00:05:40.181 03:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.181 03:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.181 03:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.181 03:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.181 03:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.438 [2024-05-15 03:00:11.356749] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
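test_skip_rpc_with_json, starting here, drives three RPCs in order: nvmf_get_transports --trtype tcp (expected to fail while no TCP transport exists), nvmf_create_transport -t tcp, and save_config, whose JSON dump fills the records below. Condensed to the client side, roughly (assuming the default /var/tmp/spdk.sock socket and build-tree-relative paths):

  ./scripts/rpc.py nvmf_get_transports --trtype tcp || echo "no tcp transport yet (expected)"
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > test/rpc/config.json

The saved file is later fed back to a second target via --json, and the test greps that target's log for 'TCP Transport Init' to prove the transport was restored from configuration rather than created by hand.
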
00:05:40.438 [2024-05-15 03:00:11.356806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995242 ] 00:05:40.438 [2024-05-15 03:00:11.453738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.438 [2024-05-15 03:00:11.549316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.370 [2024-05-15 03:00:12.315495] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:41.370 request: 00:05:41.370 { 00:05:41.370 "trtype": "tcp", 00:05:41.370 "method": "nvmf_get_transports", 00:05:41.370 "req_id": 1 00:05:41.370 } 00:05:41.370 Got JSON-RPC error response 00:05:41.370 response: 00:05:41.370 { 00:05:41.370 "code": -19, 00:05:41.370 "message": "No such device" 00:05:41.370 } 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.370 [2024-05-15 03:00:12.327635] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.370 03:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:41.370 { 00:05:41.370 "subsystems": [ 00:05:41.370 { 00:05:41.370 "subsystem": "keyring", 00:05:41.370 "config": [] 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "subsystem": "iobuf", 00:05:41.370 "config": [ 00:05:41.370 { 00:05:41.370 "method": "iobuf_set_options", 00:05:41.370 "params": { 00:05:41.370 "small_pool_count": 8192, 00:05:41.370 "large_pool_count": 1024, 00:05:41.370 "small_bufsize": 8192, 00:05:41.370 "large_bufsize": 135168 00:05:41.370 } 00:05:41.370 } 00:05:41.370 ] 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "subsystem": "sock", 00:05:41.370 "config": [ 00:05:41.370 { 00:05:41.370 "method": "sock_impl_set_options", 00:05:41.370 "params": { 00:05:41.370 "impl_name": "posix", 00:05:41.370 "recv_buf_size": 2097152, 00:05:41.370 "send_buf_size": 2097152, 00:05:41.370 "enable_recv_pipe": true, 00:05:41.370 "enable_quickack": false, 00:05:41.370 "enable_placement_id": 0, 00:05:41.370 "enable_zerocopy_send_server": true, 
00:05:41.370 "enable_zerocopy_send_client": false, 00:05:41.370 "zerocopy_threshold": 0, 00:05:41.370 "tls_version": 0, 00:05:41.370 "enable_ktls": false 00:05:41.370 } 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "method": "sock_impl_set_options", 00:05:41.370 "params": { 00:05:41.370 "impl_name": "ssl", 00:05:41.370 "recv_buf_size": 4096, 00:05:41.370 "send_buf_size": 4096, 00:05:41.370 "enable_recv_pipe": true, 00:05:41.370 "enable_quickack": false, 00:05:41.370 "enable_placement_id": 0, 00:05:41.370 "enable_zerocopy_send_server": true, 00:05:41.370 "enable_zerocopy_send_client": false, 00:05:41.370 "zerocopy_threshold": 0, 00:05:41.370 "tls_version": 0, 00:05:41.370 "enable_ktls": false 00:05:41.370 } 00:05:41.370 } 00:05:41.370 ] 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "subsystem": "vmd", 00:05:41.370 "config": [] 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "subsystem": "accel", 00:05:41.370 "config": [ 00:05:41.370 { 00:05:41.370 "method": "accel_set_options", 00:05:41.370 "params": { 00:05:41.370 "small_cache_size": 128, 00:05:41.370 "large_cache_size": 16, 00:05:41.370 "task_count": 2048, 00:05:41.370 "sequence_count": 2048, 00:05:41.370 "buf_count": 2048 00:05:41.370 } 00:05:41.370 } 00:05:41.370 ] 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "subsystem": "bdev", 00:05:41.370 "config": [ 00:05:41.370 { 00:05:41.370 "method": "bdev_set_options", 00:05:41.370 "params": { 00:05:41.370 "bdev_io_pool_size": 65535, 00:05:41.370 "bdev_io_cache_size": 256, 00:05:41.370 "bdev_auto_examine": true, 00:05:41.370 "iobuf_small_cache_size": 128, 00:05:41.370 "iobuf_large_cache_size": 16 00:05:41.370 } 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "method": "bdev_raid_set_options", 00:05:41.370 "params": { 00:05:41.370 "process_window_size_kb": 1024 00:05:41.370 } 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "method": "bdev_iscsi_set_options", 00:05:41.370 "params": { 00:05:41.370 "timeout_sec": 30 00:05:41.370 } 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "method": "bdev_nvme_set_options", 00:05:41.370 "params": { 00:05:41.370 "action_on_timeout": "none", 00:05:41.370 "timeout_us": 0, 00:05:41.370 "timeout_admin_us": 0, 00:05:41.370 "keep_alive_timeout_ms": 10000, 00:05:41.370 "arbitration_burst": 0, 00:05:41.370 "low_priority_weight": 0, 00:05:41.370 "medium_priority_weight": 0, 00:05:41.370 "high_priority_weight": 0, 00:05:41.370 "nvme_adminq_poll_period_us": 10000, 00:05:41.370 "nvme_ioq_poll_period_us": 0, 00:05:41.370 "io_queue_requests": 0, 00:05:41.370 "delay_cmd_submit": true, 00:05:41.370 "transport_retry_count": 4, 00:05:41.370 "bdev_retry_count": 3, 00:05:41.370 "transport_ack_timeout": 0, 00:05:41.370 "ctrlr_loss_timeout_sec": 0, 00:05:41.370 "reconnect_delay_sec": 0, 00:05:41.370 "fast_io_fail_timeout_sec": 0, 00:05:41.370 "disable_auto_failback": false, 00:05:41.370 "generate_uuids": false, 00:05:41.370 "transport_tos": 0, 00:05:41.370 "nvme_error_stat": false, 00:05:41.370 "rdma_srq_size": 0, 00:05:41.370 "io_path_stat": false, 00:05:41.370 "allow_accel_sequence": false, 00:05:41.370 "rdma_max_cq_size": 0, 00:05:41.370 "rdma_cm_event_timeout_ms": 0, 00:05:41.370 "dhchap_digests": [ 00:05:41.370 "sha256", 00:05:41.370 "sha384", 00:05:41.370 "sha512" 00:05:41.370 ], 00:05:41.370 "dhchap_dhgroups": [ 00:05:41.370 "null", 00:05:41.370 "ffdhe2048", 00:05:41.370 "ffdhe3072", 00:05:41.370 "ffdhe4096", 00:05:41.370 "ffdhe6144", 00:05:41.370 "ffdhe8192" 00:05:41.370 ] 00:05:41.370 } 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "method": "bdev_nvme_set_hotplug", 00:05:41.370 "params": { 
00:05:41.370 "period_us": 100000, 00:05:41.370 "enable": false 00:05:41.370 } 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "method": "bdev_wait_for_examine" 00:05:41.370 } 00:05:41.370 ] 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "subsystem": "scsi", 00:05:41.370 "config": null 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "subsystem": "scheduler", 00:05:41.370 "config": [ 00:05:41.370 { 00:05:41.370 "method": "framework_set_scheduler", 00:05:41.370 "params": { 00:05:41.370 "name": "static" 00:05:41.370 } 00:05:41.370 } 00:05:41.370 ] 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "subsystem": "vhost_scsi", 00:05:41.370 "config": [] 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "subsystem": "vhost_blk", 00:05:41.370 "config": [] 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "subsystem": "ublk", 00:05:41.370 "config": [] 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "subsystem": "nbd", 00:05:41.370 "config": [] 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "subsystem": "nvmf", 00:05:41.370 "config": [ 00:05:41.370 { 00:05:41.370 "method": "nvmf_set_config", 00:05:41.370 "params": { 00:05:41.370 "discovery_filter": "match_any", 00:05:41.370 "admin_cmd_passthru": { 00:05:41.370 "identify_ctrlr": false 00:05:41.370 } 00:05:41.370 } 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "method": "nvmf_set_max_subsystems", 00:05:41.370 "params": { 00:05:41.370 "max_subsystems": 1024 00:05:41.370 } 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "method": "nvmf_set_crdt", 00:05:41.370 "params": { 00:05:41.370 "crdt1": 0, 00:05:41.370 "crdt2": 0, 00:05:41.370 "crdt3": 0 00:05:41.370 } 00:05:41.370 }, 00:05:41.370 { 00:05:41.370 "method": "nvmf_create_transport", 00:05:41.370 "params": { 00:05:41.370 "trtype": "TCP", 00:05:41.370 "max_queue_depth": 128, 00:05:41.370 "max_io_qpairs_per_ctrlr": 127, 00:05:41.370 "in_capsule_data_size": 4096, 00:05:41.370 "max_io_size": 131072, 00:05:41.370 "io_unit_size": 131072, 00:05:41.370 "max_aq_depth": 128, 00:05:41.371 "num_shared_buffers": 511, 00:05:41.371 "buf_cache_size": 4294967295, 00:05:41.371 "dif_insert_or_strip": false, 00:05:41.371 "zcopy": false, 00:05:41.371 "c2h_success": true, 00:05:41.371 "sock_priority": 0, 00:05:41.371 "abort_timeout_sec": 1, 00:05:41.371 "ack_timeout": 0, 00:05:41.371 "data_wr_pool_size": 0 00:05:41.371 } 00:05:41.371 } 00:05:41.371 ] 00:05:41.371 }, 00:05:41.371 { 00:05:41.371 "subsystem": "iscsi", 00:05:41.371 "config": [ 00:05:41.371 { 00:05:41.371 "method": "iscsi_set_options", 00:05:41.371 "params": { 00:05:41.371 "node_base": "iqn.2016-06.io.spdk", 00:05:41.371 "max_sessions": 128, 00:05:41.371 "max_connections_per_session": 2, 00:05:41.371 "max_queue_depth": 64, 00:05:41.371 "default_time2wait": 2, 00:05:41.371 "default_time2retain": 20, 00:05:41.371 "first_burst_length": 8192, 00:05:41.371 "immediate_data": true, 00:05:41.371 "allow_duplicated_isid": false, 00:05:41.371 "error_recovery_level": 0, 00:05:41.371 "nop_timeout": 60, 00:05:41.371 "nop_in_interval": 30, 00:05:41.371 "disable_chap": false, 00:05:41.371 "require_chap": false, 00:05:41.371 "mutual_chap": false, 00:05:41.371 "chap_group": 0, 00:05:41.371 "max_large_datain_per_connection": 64, 00:05:41.371 "max_r2t_per_connection": 4, 00:05:41.371 "pdu_pool_size": 36864, 00:05:41.371 "immediate_data_pool_size": 16384, 00:05:41.371 "data_out_pool_size": 2048 00:05:41.371 } 00:05:41.371 } 00:05:41.371 ] 00:05:41.371 } 00:05:41.371 ] 00:05:41.371 } 00:05:41.371 03:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:41.371 03:00:12 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3995242 00:05:41.371 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3995242 ']' 00:05:41.371 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3995242 00:05:41.371 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:41.371 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:41.371 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3995242 00:05:41.371 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:41.371 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:41.371 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3995242' 00:05:41.371 killing process with pid 3995242 00:05:41.371 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3995242 00:05:41.371 03:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3995242 00:05:41.937 03:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3995615 00:05:41.937 03:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:41.937 03:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:47.278 03:00:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3995615 00:05:47.278 03:00:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3995615 ']' 00:05:47.278 03:00:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3995615 00:05:47.278 03:00:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:47.278 03:00:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:47.278 03:00:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3995615 00:05:47.278 03:00:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:47.278 03:00:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:47.278 03:00:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3995615' 00:05:47.278 killing process with pid 3995615 00:05:47.278 03:00:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3995615 00:05:47.278 03:00:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3995615 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:47.278 00:05:47.278 real 0m7.035s 00:05:47.278 user 0m6.904s 00:05:47.278 sys 0m0.693s 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.278 ************************************ 00:05:47.278 END TEST 
skip_rpc_with_json 00:05:47.278 ************************************ 00:05:47.278 03:00:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:47.278 03:00:18 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.278 03:00:18 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.278 03:00:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.278 ************************************ 00:05:47.278 START TEST skip_rpc_with_delay 00:05:47.278 ************************************ 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:47.278 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.536 [2024-05-15 03:00:18.465940] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
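The ERROR just above is the expected outcome: --wait-for-rpc pauses initialization until an RPC arrives, which is meaningless when --no-rpc-server removes the RPC listener, so spdk_tgt refuses the combination and exits non-zero. test_skip_rpc_with_delay only asserts that refusal, roughly (placeholder path):

  # The flag pair must be rejected up front rather than hang waiting for an
  # RPC that can never arrive.
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: incompatible flags were accepted" >&2
      exit 1
  fi
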
00:05:47.536 [2024-05-15 03:00:18.466020] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:47.536 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:47.536 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.536 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:47.536 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.536 00:05:47.536 real 0m0.084s 00:05:47.536 user 0m0.057s 00:05:47.536 sys 0m0.026s 00:05:47.536 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.536 03:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:47.536 ************************************ 00:05:47.536 END TEST skip_rpc_with_delay 00:05:47.536 ************************************ 00:05:47.536 03:00:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:47.536 03:00:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:47.536 03:00:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:47.536 03:00:18 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.536 03:00:18 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.536 03:00:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.536 ************************************ 00:05:47.536 START TEST exit_on_failed_rpc_init 00:05:47.536 ************************************ 00:05:47.536 03:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:47.536 03:00:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3996584 00:05:47.536 03:00:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3996584 00:05:47.536 03:00:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.536 03:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3996584 ']' 00:05:47.536 03:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.536 03:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.536 03:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.536 03:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.537 03:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.537 [2024-05-15 03:00:18.624578] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:05:47.537 [2024-05-15 03:00:18.624631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996584 ] 00:05:47.795 [2024-05-15 03:00:18.722162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.795 [2024-05-15 03:00:18.819099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:48.730 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.730 [2024-05-15 03:00:19.631859] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:48.730 [2024-05-15 03:00:19.631917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996716 ] 00:05:48.730 [2024-05-15 03:00:19.726075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.730 [2024-05-15 03:00:19.815882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.730 [2024-05-15 03:00:19.815959] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
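This is the collision exit_on_failed_rpc_init sets up on purpose: the first target (-m 0x1) already owns the default RPC socket, so the second instance (-m 0x2) cannot listen and, as the next records show, the app stops with a non-zero code. A sketch of the clash, plus the usual way out (the -r option for an alternate socket appears later in this log in the json_config run; the spdk2.sock name is hypothetical):

  ./build/bin/spdk_tgt -m 0x1 &           # first instance binds /var/tmp/spdk.sock
  sleep 1
  if ./build/bin/spdk_tgt -m 0x2; then    # same default socket: expected to fail fast
      echo "unexpected: second target bound the same socket" >&2
  fi
  # Giving the second target its own socket would avoid the clash:
  #   ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock
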
00:05:48.730 [2024-05-15 03:00:19.815972] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:48.730 [2024-05-15 03:00:19.815981] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3996584 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3996584 ']' 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3996584 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3996584 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3996584' 00:05:48.989 killing process with pid 3996584 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3996584 00:05:48.989 03:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3996584 00:05:49.246 00:05:49.246 real 0m1.786s 00:05:49.246 user 0m2.171s 00:05:49.246 sys 0m0.490s 00:05:49.246 03:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.246 03:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.246 ************************************ 00:05:49.246 END TEST exit_on_failed_rpc_init 00:05:49.246 ************************************ 00:05:49.246 03:00:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:49.246 00:05:49.246 real 0m14.719s 00:05:49.246 user 0m14.417s 00:05:49.246 sys 0m1.759s 00:05:49.246 03:00:20 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.246 03:00:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.246 ************************************ 00:05:49.246 END TEST skip_rpc 00:05:49.246 ************************************ 00:05:49.504 03:00:20 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:49.504 03:00:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.504 03:00:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.504 03:00:20 -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.504 ************************************ 00:05:49.504 START TEST rpc_client 00:05:49.504 ************************************ 00:05:49.504 03:00:20 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:49.504 * Looking for test storage... 00:05:49.504 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:05:49.504 03:00:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:49.504 OK 00:05:49.504 03:00:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:49.504 00:05:49.504 real 0m0.114s 00:05:49.504 user 0m0.052s 00:05:49.504 sys 0m0.070s 00:05:49.504 03:00:20 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.504 03:00:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:49.504 ************************************ 00:05:49.504 END TEST rpc_client 00:05:49.504 ************************************ 00:05:49.505 03:00:20 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:49.505 03:00:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.505 03:00:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.505 03:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:49.505 ************************************ 00:05:49.505 START TEST json_config 00:05:49.505 ************************************ 00:05:49.505 03:00:20 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:49.772 03:00:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:05:49.772 03:00:20 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.772 03:00:20 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.772 03:00:20 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.772 03:00:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.772 03:00:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.772 03:00:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.772 03:00:20 json_config -- paths/export.sh@5 -- # export PATH 00:05:49.772 03:00:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@47 -- # : 0 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:49.772 03:00:20 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:49.772 03:00:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:05:49.772 03:00:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:49.772 03:00:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:49.772 03:00:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:49.772 03:00:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST 
+ SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:49.772 03:00:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:49.772 03:00:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:49.772 03:00:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:49.772 03:00:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:49.773 03:00:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:49.773 03:00:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:49.773 03:00:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:05:49.773 03:00:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:49.773 03:00:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:49.773 03:00:20 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:49.773 03:00:20 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:49.773 INFO: JSON configuration test init 00:05:49.773 03:00:20 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:49.773 03:00:20 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:49.773 03:00:20 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.773 03:00:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.773 03:00:20 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:49.773 03:00:20 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.773 03:00:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.773 03:00:20 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:49.773 03:00:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:49.773 03:00:20 json_config -- json_config/common.sh@10 -- # shift 00:05:49.773 03:00:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:49.773 03:00:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:49.773 03:00:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:49.773 03:00:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.773 03:00:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.773 03:00:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3996944 00:05:49.773 03:00:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:49.773 Waiting for target to run... 
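The json_config suite about to start runs its target with the app_params above (-m 0x1 -s 1024) on a dedicated socket (-r /var/tmp/spdk_tgt.sock --wait-for-rpc), so every tgt_rpc call in the records below is really rpc.py -s /var/tmp/spdk_tgt.sock <method>. Its first job in this crypto-enabled run is wiring the DPDK cryptodev module into the accel framework; the client side condenses to:

  RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC dpdk_cryptodev_scan_accel_module
  $RPC accel_assign_opc -o encrypt -m dpdk_cryptodev
  $RPC accel_assign_opc -o decrypt -m dpdk_cryptodev
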
00:05:49.773 03:00:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:49.773 03:00:20 json_config -- json_config/common.sh@25 -- # waitforlisten 3996944 /var/tmp/spdk_tgt.sock 00:05:49.773 03:00:20 json_config -- common/autotest_common.sh@827 -- # '[' -z 3996944 ']' 00:05:49.773 03:00:20 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.773 03:00:20 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.773 03:00:20 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.773 03:00:20 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.773 03:00:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.773 [2024-05-15 03:00:20.809265] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:49.773 [2024-05-15 03:00:20.809327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996944 ] 00:05:50.033 [2024-05-15 03:00:21.136870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.291 [2024-05-15 03:00:21.223322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.859 03:00:21 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:50.859 03:00:21 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:50.859 03:00:21 json_config -- json_config/common.sh@26 -- # echo '' 00:05:50.859 00:05:50.859 03:00:21 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:50.859 03:00:21 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:50.859 03:00:21 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:50.859 03:00:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.859 03:00:21 json_config -- json_config/json_config.sh@95 -- # [[ 1 -eq 1 ]] 00:05:50.859 03:00:21 json_config -- json_config/json_config.sh@96 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:05:50.860 03:00:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:05:50.860 03:00:22 json_config -- json_config/json_config.sh@97 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:50.860 03:00:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:51.126 [2024-05-15 03:00:22.230270] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:05:51.126 03:00:22 json_config -- json_config/json_config.sh@98 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:51.126 03:00:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:51.385 [2024-05-15 03:00:22.474902] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation decrypt will be assigned to module dpdk_cryptodev 00:05:51.386 03:00:22 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:51.386 03:00:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.386 03:00:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.386 03:00:22 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:51.386 03:00:22 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:51.386 03:00:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:51.644 [2024-05-15 03:00:22.780920] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:05:56.914 03:00:27 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:56.914 03:00:27 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:56.914 03:00:27 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:56.914 03:00:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.914 03:00:27 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:56.914 03:00:27 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:56.914 03:00:27 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:56.914 03:00:27 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:56.914 03:00:27 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:56.914 03:00:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:57.171 03:00:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.171 03:00:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:05:57.171 03:00:28 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:57.171 03:00:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:05:57.171 
03:00:28 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:05:57.171 03:00:28 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:05:57.171 03:00:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:05:57.428 03:00:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:05:57.428 03:00:28 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:05:57.428 03:00:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:05:57.428 03:00:28 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:05:57.428 03:00:28 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:05:57.428 03:00:28 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:05:57.428 03:00:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:05:57.686 Nvme0n1p0 Nvme0n1p1 00:05:57.686 03:00:28 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:05:57.686 03:00:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:05:57.944 [2024-05-15 03:00:28.883708] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:57.944 [2024-05-15 03:00:28.883758] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:57.944 00:05:57.944 03:00:28 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:05:57.944 03:00:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:05:57.944 Malloc3 00:05:57.944 03:00:29 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:57.944 03:00:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:58.201 [2024-05-15 03:00:29.288881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:58.201 [2024-05-15 03:00:29.288925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.201 [2024-05-15 03:00:29.288941] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1666130 00:05:58.201 [2024-05-15 03:00:29.288951] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.201 [2024-05-15 03:00:29.290683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.201 [2024-05-15 03:00:29.290709] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:05:58.201 PTBdevFromMalloc3 00:05:58.201 03:00:29 json_config -- 
json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:05:58.201 03:00:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:05:58.459 Null0 00:05:58.459 03:00:29 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:05:58.459 03:00:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:05:58.718 Malloc0 00:05:58.718 03:00:29 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:05:58.718 03:00:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:05:58.975 Malloc1 00:05:58.975 03:00:29 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:05:58.975 03:00:29 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:05:59.233 102400+0 records in 00:05:59.233 102400+0 records out 00:05:59.233 104857600 bytes (105 MB, 100 MiB) copied, 0.16711 s, 627 MB/s 00:05:59.233 03:00:30 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:05:59.233 03:00:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:05:59.490 aio_disk 00:05:59.490 03:00:30 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:05:59.490 03:00:30 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:05:59.490 03:00:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:02.019 fe4e1677-42a3-4a75-a950-938d9e5e42ba 00:06:02.019 03:00:32 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:06:02.019 03:00:32 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:06:02.019 03:00:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:06:02.019 03:00:33 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:06:02.019 03:00:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:06:02.277 03:00:33 json_config -- json_config/json_config.sh@154 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:02.277 03:00:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:02.534 03:00:33 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:02.534 03:00:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:02.791 03:00:33 json_config -- json_config/json_config.sh@157 -- # [[ 1 -eq 1 ]] 00:06:02.791 03:00:33 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:02.791 03:00:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:03.049 MallocForCryptoBdev 00:06:03.049 03:00:34 json_config -- json_config/json_config.sh@159 -- # lspci -d:37c8 00:06:03.049 03:00:34 json_config -- json_config/json_config.sh@159 -- # wc -l 00:06:03.049 03:00:34 json_config -- json_config/json_config.sh@159 -- # [[ 3 -eq 0 ]] 00:06:03.049 03:00:34 json_config -- json_config/json_config.sh@162 -- # local crypto_driver=crypto_qat 00:06:03.049 03:00:34 json_config -- json_config/json_config.sh@165 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:03.049 03:00:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:03.308 [2024-05-15 03:00:34.375148] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:06:03.308 CryptoMallocBdev 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@169 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:cbeaee7e-6162-4fcf-9fb8-d28121d94c43 bdev_register:6d7412ea-9e43-4dbb-b27d-a146c0876a4e bdev_register:f43a3ce1-0bc6-4d8e-8a8d-4a8783658037 bdev_register:09702fac-99b8-48fe-b18b-c849cf2c3743 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 
bdev_register:Malloc1 bdev_register:aio_disk bdev_register:cbeaee7e-6162-4fcf-9fb8-d28121d94c43 bdev_register:6d7412ea-9e43-4dbb-b27d-a146c0876a4e bdev_register:f43a3ce1-0bc6-4d8e-8a8d-4a8783658037 bdev_register:09702fac-99b8-48fe-b18b-c849cf2c3743 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@71 -- # sort 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@72 -- # sort 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:06:03.308 03:00:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:03.308 03:00:34 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 
json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:cbeaee7e-6162-4fcf-9fb8-d28121d94c43 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:6d7412ea-9e43-4dbb-b27d-a146c0876a4e 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:f43a3ce1-0bc6-4d8e-8a8d-4a8783658037 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:09702fac-99b8-48fe-b18b-c849cf2c3743 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:MallocForCryptoBdev 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:CryptoMallocBdev 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:09702fac-99b8-48fe-b18b-c849cf2c3743 bdev_register:6d7412ea-9e43-4dbb-b27d-a146c0876a4e bdev_register:aio_disk bdev_register:cbeaee7e-6162-4fcf-9fb8-d28121d94c43 bdev_register:CryptoMallocBdev bdev_register:f43a3ce1-0bc6-4d8e-8a8d-4a8783658037 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev 
bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\9\7\0\2\f\a\c\-\9\9\b\8\-\4\8\f\e\-\b\1\8\b\-\c\8\4\9\c\f\2\c\3\7\4\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\d\7\4\1\2\e\a\-\9\e\4\3\-\4\d\b\b\-\b\2\7\d\-\a\1\4\6\c\0\8\7\6\a\4\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\c\b\e\a\e\e\7\e\-\6\1\6\2\-\4\f\c\f\-\9\f\b\8\-\d\2\8\1\2\1\d\9\4\c\4\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\4\3\a\3\c\e\1\-\0\b\c\6\-\4\d\8\e\-\8\a\8\d\-\4\a\8\7\8\3\6\5\8\0\3\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@86 -- # cat 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:09702fac-99b8-48fe-b18b-c849cf2c3743 bdev_register:6d7412ea-9e43-4dbb-b27d-a146c0876a4e bdev_register:aio_disk bdev_register:cbeaee7e-6162-4fcf-9fb8-d28121d94c43 bdev_register:CryptoMallocBdev bdev_register:f43a3ce1-0bc6-4d8e-8a8d-4a8783658037 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:06:03.567 Expected events matched: 00:06:03.567 bdev_register:09702fac-99b8-48fe-b18b-c849cf2c3743 00:06:03.567 bdev_register:6d7412ea-9e43-4dbb-b27d-a146c0876a4e 00:06:03.567 bdev_register:aio_disk 00:06:03.567 bdev_register:cbeaee7e-6162-4fcf-9fb8-d28121d94c43 00:06:03.567 bdev_register:CryptoMallocBdev 00:06:03.567 bdev_register:f43a3ce1-0bc6-4d8e-8a8d-4a8783658037 00:06:03.567 bdev_register:Malloc0 00:06:03.567 bdev_register:Malloc0p0 00:06:03.567 bdev_register:Malloc0p1 00:06:03.567 bdev_register:Malloc0p2 00:06:03.567 bdev_register:Malloc1 00:06:03.567 bdev_register:Malloc3 00:06:03.567 bdev_register:MallocForCryptoBdev 00:06:03.567 bdev_register:Null0 00:06:03.567 bdev_register:Nvme0n1 00:06:03.567 bdev_register:Nvme0n1p0 00:06:03.567 bdev_register:Nvme0n1p1 00:06:03.567 bdev_register:PTBdevFromMalloc3 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:06:03.567 03:00:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.567 03:00:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:03.567 03:00:34 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:03.567 03:00:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.567 03:00:34 json_config -- 
common/autotest_common.sh@10 -- # set +x 00:06:03.825 03:00:34 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:03.825 03:00:34 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.825 03:00:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:04.085 MallocBdevForConfigChangeCheck 00:06:04.085 03:00:35 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:04.085 03:00:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.085 03:00:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.085 03:00:35 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:04.085 03:00:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.365 03:00:35 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:04.365 INFO: shutting down applications... 00:06:04.365 03:00:35 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:04.365 03:00:35 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:04.365 03:00:35 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:04.365 03:00:35 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:04.622 [2024-05-15 03:00:35.619082] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:06:05.998 Calling clear_iscsi_subsystem 00:06:05.998 Calling clear_nvmf_subsystem 00:06:05.998 Calling clear_nbd_subsystem 00:06:05.998 Calling clear_ublk_subsystem 00:06:05.998 Calling clear_vhost_blk_subsystem 00:06:05.998 Calling clear_vhost_scsi_subsystem 00:06:05.998 Calling clear_bdev_subsystem 00:06:05.998 03:00:37 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:06:05.998 03:00:37 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:05.999 03:00:37 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:05.999 03:00:37 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.999 03:00:37 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:05.999 03:00:37 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:06.565 03:00:37 json_config -- json_config/json_config.sh@345 -- # break 00:06:06.565 03:00:37 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:06.565 03:00:37 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:06.565 03:00:37 json_config -- json_config/common.sh@31 -- # local app=target 00:06:06.565 03:00:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:06.565 03:00:37 json_config -- json_config/common.sh@35 -- # [[ -n 
3996944 ]] 00:06:06.565 03:00:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3996944 00:06:06.565 03:00:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:06.565 03:00:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.565 03:00:37 json_config -- json_config/common.sh@41 -- # kill -0 3996944 00:06:06.565 03:00:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.823 03:00:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.823 03:00:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.823 03:00:37 json_config -- json_config/common.sh@41 -- # kill -0 3996944 00:06:06.823 03:00:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:06.823 03:00:37 json_config -- json_config/common.sh@43 -- # break 00:06:06.823 03:00:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:06.823 03:00:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:06.823 SPDK target shutdown done 00:06:06.823 03:00:37 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:06.823 INFO: relaunching applications... 00:06:06.823 03:00:37 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.823 03:00:37 json_config -- json_config/common.sh@9 -- # local app=target 00:06:06.823 03:00:37 json_config -- json_config/common.sh@10 -- # shift 00:06:06.823 03:00:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.823 03:00:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.823 03:00:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.823 03:00:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.823 03:00:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.823 03:00:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4000041 00:06:06.823 03:00:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.823 Waiting for target to run... 00:06:06.823 03:00:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.823 03:00:37 json_config -- json_config/common.sh@25 -- # waitforlisten 4000041 /var/tmp/spdk_tgt.sock 00:06:06.823 03:00:37 json_config -- common/autotest_common.sh@827 -- # '[' -z 4000041 ']' 00:06:06.823 03:00:37 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.823 03:00:37 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.823 03:00:37 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.823 03:00:37 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.823 03:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.081 [2024-05-15 03:00:38.019147] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
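What this restart is building toward: once the relaunched target is up, json_diff.sh re-saves the live configuration and diffs it against the spdk_tgt_config.json snapshot taken before the restart, with both sides passed through config_filter.py so that key ordering cannot cause a false mismatch. Stripped of the test plumbing, the check is roughly the sketch below; the temp-file names are made up, and the paths assume the repo root as the working directory:

    # Normalize both configs (config_filter.py sorts JSON read from stdin), then diff.
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'

A zero diff means every bdev created earlier survived the save/load round trip.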
00:06:07.081 [2024-05-15 03:00:38.019210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000041 ] 00:06:07.648 [2024-05-15 03:00:38.523823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.648 [2024-05-15 03:00:38.619481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.648 [2024-05-15 03:00:38.665644] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:06:07.648 [2024-05-15 03:00:38.673681] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:06:07.648 [2024-05-15 03:00:38.681699] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:06:07.648 [2024-05-15 03:00:38.763242] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:06:10.186 [2024-05-15 03:00:41.034538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:10.186 [2024-05-15 03:00:41.034592] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:10.186 [2024-05-15 03:00:41.034604] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:10.186 [2024-05-15 03:00:41.042553] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:10.186 [2024-05-15 03:00:41.042577] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:10.186 [2024-05-15 03:00:41.050569] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:10.186 [2024-05-15 03:00:41.050591] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:10.186 [2024-05-15 03:00:41.058603] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:06:10.186 [2024-05-15 03:00:41.058627] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:06:10.186 [2024-05-15 03:00:41.058637] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:13.472 [2024-05-15 03:00:43.928039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:13.472 [2024-05-15 03:00:43.928084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:13.472 [2024-05-15 03:00:43.928100] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26164d0 00:06:13.472 [2024-05-15 03:00:43.928109] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:13.472 [2024-05-15 03:00:43.928400] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:13.472 [2024-05-15 03:00:43.928420] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:13.472 03:00:44 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.472 03:00:44 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:13.472 03:00:44 json_config -- json_config/common.sh@26 -- # echo '' 00:06:13.472 00:06:13.472 03:00:44 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:13.472 03:00:44 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: 
Checking if target configuration is the same...' 00:06:13.472 INFO: Checking if target configuration is the same... 00:06:13.472 03:00:44 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.472 03:00:44 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:13.472 03:00:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.472 + '[' 2 -ne 2 ']' 00:06:13.472 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:13.472 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:06:13.472 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:13.472 +++ basename /dev/fd/62 00:06:13.472 ++ mktemp /tmp/62.XXX 00:06:13.472 + tmp_file_1=/tmp/62.Bb2 00:06:13.472 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.472 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:13.472 + tmp_file_2=/tmp/spdk_tgt_config.json.LAF 00:06:13.472 + ret=0 00:06:13.472 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:13.472 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:13.472 + diff -u /tmp/62.Bb2 /tmp/spdk_tgt_config.json.LAF 00:06:13.472 + echo 'INFO: JSON config files are the same' 00:06:13.472 INFO: JSON config files are the same 00:06:13.472 + rm /tmp/62.Bb2 /tmp/spdk_tgt_config.json.LAF 00:06:13.472 + exit 0 00:06:13.472 03:00:44 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:13.472 03:00:44 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:13.472 INFO: changing configuration and checking if this can be detected... 00:06:13.472 03:00:44 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:13.472 03:00:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:13.731 03:00:44 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.731 03:00:44 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:13.731 03:00:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.731 + '[' 2 -ne 2 ']' 00:06:13.731 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:13.731 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:06:13.731 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:13.731 +++ basename /dev/fd/62 00:06:13.731 ++ mktemp /tmp/62.XXX 00:06:13.731 + tmp_file_1=/tmp/62.nbB 00:06:13.731 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.731 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:13.731 + tmp_file_2=/tmp/spdk_tgt_config.json.MtR 00:06:13.731 + ret=0 00:06:13.731 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:14.297 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:14.297 + diff -u /tmp/62.nbB /tmp/spdk_tgt_config.json.MtR 00:06:14.297 + ret=1 00:06:14.297 + echo '=== Start of file: /tmp/62.nbB ===' 00:06:14.297 + cat /tmp/62.nbB 00:06:14.297 + echo '=== End of file: /tmp/62.nbB ===' 00:06:14.297 + echo '' 00:06:14.297 + echo '=== Start of file: /tmp/spdk_tgt_config.json.MtR ===' 00:06:14.297 + cat /tmp/spdk_tgt_config.json.MtR 00:06:14.297 + echo '=== End of file: /tmp/spdk_tgt_config.json.MtR ===' 00:06:14.297 + echo '' 00:06:14.297 + rm /tmp/62.nbB /tmp/spdk_tgt_config.json.MtR 00:06:14.297 + exit 1 00:06:14.297 03:00:45 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:14.297 INFO: configuration change detected. 00:06:14.297 03:00:45 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:14.297 03:00:45 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:14.297 03:00:45 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:14.297 03:00:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.297 03:00:45 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:14.297 03:00:45 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:14.297 03:00:45 json_config -- json_config/json_config.sh@317 -- # [[ -n 4000041 ]] 00:06:14.297 03:00:45 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:14.297 03:00:45 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:14.297 03:00:45 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:14.297 03:00:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.297 03:00:45 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:06:14.297 03:00:45 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:06:14.297 03:00:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:06:14.556 03:00:45 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:06:14.556 03:00:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:06:14.815 03:00:45 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:06:14.815 03:00:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:06:15.073 03:00:46 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:06:15.073 03:00:46 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:06:15.331 03:00:46 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:15.331 03:00:46 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:15.331 03:00:46 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:15.331 03:00:46 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:15.331 03:00:46 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:15.331 03:00:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.331 03:00:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.331 03:00:46 json_config -- json_config/json_config.sh@323 -- # killprocess 4000041 00:06:15.331 03:00:46 json_config -- common/autotest_common.sh@946 -- # '[' -z 4000041 ']' 00:06:15.331 03:00:46 json_config -- common/autotest_common.sh@950 -- # kill -0 4000041 00:06:15.331 03:00:46 json_config -- common/autotest_common.sh@951 -- # uname 00:06:15.331 03:00:46 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:15.331 03:00:46 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4000041 00:06:15.331 03:00:46 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:15.331 03:00:46 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:15.331 03:00:46 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4000041' 00:06:15.331 killing process with pid 4000041 00:06:15.331 03:00:46 json_config -- common/autotest_common.sh@965 -- # kill 4000041 00:06:15.331 03:00:46 json_config -- common/autotest_common.sh@970 -- # wait 4000041 00:06:17.234 03:00:48 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.234 03:00:48 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:17.234 03:00:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.234 03:00:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.234 03:00:48 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:17.234 03:00:48 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:17.234 INFO: Success 00:06:17.234 00:06:17.234 real 0m27.541s 00:06:17.234 user 0m33.123s 00:06:17.234 sys 0m3.299s 00:06:17.234 03:00:48 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.234 03:00:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.234 ************************************ 00:06:17.234 END TEST json_config 00:06:17.234 ************************************ 00:06:17.234 03:00:48 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:17.234 03:00:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.234 03:00:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.234 03:00:48 -- common/autotest_common.sh@10 -- # set +x 00:06:17.234 ************************************ 00:06:17.234 START TEST json_config_extra_key 00:06:17.234 ************************************ 00:06:17.234 03:00:48 json_config_extra_key -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:06:17.234 03:00:48 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.234 03:00:48 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.234 03:00:48 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.234 03:00:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.234 03:00:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.234 03:00:48 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.234 03:00:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:17.234 03:00:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:17.234 03:00:48 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:17.234 INFO: launching applications... 
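The four associative arrays declared above are the whole mechanism that lets common.sh launch and address any app by name; the start step that follows reduces to roughly this (a simplified sketch, not the actual json_config_test_start_app, with a hypothetical function name and paths relative to the repo root):

    declare -A app_pid app_socket app_params configs_path
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=test/json_config/extra_key.json

    start_app() {                              # hypothetical name for the launch step
        local app=$1
        # app_params is left unquoted on purpose so the flag string word-splits
        build/bin/spdk_tgt ${app_params[$app]} \
            -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
        app_pid[$app]=$!                       # pid later used by kill -SIGINT / kill -0
    }

Keying pid, socket, parameters, and config path by app name is what lets the same shutdown and waitforlisten helpers serve both the plain json_config run and this extra_key variant.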
00:06:17.234 03:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:17.234 03:00:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:17.234 03:00:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:17.235 03:00:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:17.235 03:00:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:17.235 03:00:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:17.235 03:00:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.235 03:00:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.235 03:00:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4001970 00:06:17.235 03:00:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:17.235 Waiting for target to run... 00:06:17.235 03:00:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4001970 /var/tmp/spdk_tgt.sock 00:06:17.235 03:00:48 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 4001970 ']' 00:06:17.235 03:00:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:17.235 03:00:48 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:17.235 03:00:48 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.235 03:00:48 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:17.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:17.235 03:00:48 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.235 03:00:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:17.493 [2024-05-15 03:00:48.408125] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:17.493 [2024-05-15 03:00:48.408190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001970 ] 00:06:17.751 [2024-05-15 03:00:48.900973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.010 [2024-05-15 03:00:49.008087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.269 03:00:49 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.269 03:00:49 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:18.269 03:00:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:18.269 00:06:18.269 03:00:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:18.269 INFO: shutting down applications... 
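The shutdown announced here is the same SIGINT-and-poll loop used for the earlier json_config target: send SIGINT, then probe the pid with kill -0 for up to 30 half-second intervals. Condensed into a sketch (function name hypothetical):

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid" || return 1              # ask spdk_tgt to exit cleanly
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || return 0   # kill -0 only checks liveness
            sleep 0.5
        done
        return 1                                     # still alive after ~15 s
    }

In the trace that follows, the first kill -0 still finds the process, the loop sleeps once, and the second probe fails, i.e. the target exited within half a second.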
00:06:18.269 03:00:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:18.269 03:00:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:18.269 03:00:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:18.269 03:00:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4001970 ]] 00:06:18.269 03:00:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4001970 00:06:18.269 03:00:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:18.269 03:00:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.269 03:00:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4001970 00:06:18.269 03:00:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:18.840 03:00:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:18.840 03:00:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.840 03:00:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4001970 00:06:18.840 03:00:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:18.840 03:00:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:18.840 03:00:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:18.840 03:00:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:18.840 SPDK target shutdown done 00:06:18.840 03:00:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:18.840 Success 00:06:18.840 00:06:18.840 real 0m1.609s 00:06:18.840 user 0m1.220s 00:06:18.840 sys 0m0.604s 00:06:18.840 03:00:49 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.840 03:00:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:18.840 ************************************ 00:06:18.840 END TEST json_config_extra_key 00:06:18.840 ************************************ 00:06:18.840 03:00:49 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:18.840 03:00:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.840 03:00:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.840 03:00:49 -- common/autotest_common.sh@10 -- # set +x 00:06:18.840 ************************************ 00:06:18.840 START TEST alias_rpc 00:06:18.840 ************************************ 00:06:18.840 03:00:49 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.100 * Looking for test storage... 
00:06:19.100 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:06:19.100 03:00:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:19.100 03:00:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4002245 00:06:19.100 03:00:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4002245 00:06:19.100 03:00:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.100 03:00:50 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 4002245 ']' 00:06:19.100 03:00:50 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.100 03:00:50 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.100 03:00:50 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.100 03:00:50 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.100 03:00:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.100 [2024-05-15 03:00:50.078268] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:19.100 [2024-05-15 03:00:50.078331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002245 ] 00:06:19.100 [2024-05-15 03:00:50.176921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.359 [2024-05-15 03:00:50.268313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.926 03:00:51 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.926 03:00:51 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:19.926 03:00:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:20.184 03:00:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4002245 00:06:20.184 03:00:51 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 4002245 ']' 00:06:20.184 03:00:51 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 4002245 00:06:20.184 03:00:51 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:20.184 03:00:51 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.184 03:00:51 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4002245 00:06:20.184 03:00:51 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.184 03:00:51 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.184 03:00:51 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4002245' 00:06:20.184 killing process with pid 4002245 00:06:20.184 03:00:51 alias_rpc -- common/autotest_common.sh@965 -- # kill 4002245 00:06:20.184 03:00:51 alias_rpc -- common/autotest_common.sh@970 -- # wait 4002245 00:06:20.751 00:06:20.751 real 0m1.693s 00:06:20.751 user 0m1.893s 00:06:20.751 sys 0m0.462s 00:06:20.751 03:00:51 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.751 03:00:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.751 ************************************ 00:06:20.752 END TEST alias_rpc 00:06:20.752 
************************************ 00:06:20.752 03:00:51 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:20.752 03:00:51 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:20.752 03:00:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:20.752 03:00:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.752 03:00:51 -- common/autotest_common.sh@10 -- # set +x 00:06:20.752 ************************************ 00:06:20.752 START TEST spdkcli_tcp 00:06:20.752 ************************************ 00:06:20.752 03:00:51 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:20.752 * Looking for test storage... 00:06:20.752 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:06:20.752 03:00:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:06:20.752 03:00:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:20.752 03:00:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:06:20.752 03:00:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:20.752 03:00:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:20.752 03:00:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:20.752 03:00:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:20.752 03:00:51 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:20.752 03:00:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.752 03:00:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4002532 00:06:20.752 03:00:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4002532 00:06:20.752 03:00:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:20.752 03:00:51 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 4002532 ']' 00:06:20.752 03:00:51 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.752 03:00:51 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.752 03:00:51 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.752 03:00:51 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.752 03:00:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.752 [2024-05-15 03:00:51.854934] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
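spdk_tgt itself only listens on the UNIX-domain RPC socket, so the TCP leg of this test is provided by socat: it forwards 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py then speaks plain TCP. Reduced to its essentials (the cleanup kill is added for completeness; commands and addresses are as in the trace below):

    # Bridge TCP 127.0.0.1:9998 to the target's RPC socket, then issue an RPC over TCP.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"

The -r 100 retry and -t 2 timeout flags give rpc.py room to keep retrying while socat is still setting up its listener.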
00:06:20.752 [2024-05-15 03:00:51.854990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002532 ] 00:06:21.011 [2024-05-15 03:00:51.953739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.011 [2024-05-15 03:00:52.046403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.011 [2024-05-15 03:00:52.046409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.578 03:00:52 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.578 03:00:52 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:21.578 03:00:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:21.578 03:00:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4002760 00:06:21.578 03:00:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:21.874 [ 00:06:21.874 "bdev_malloc_delete", 00:06:21.874 "bdev_malloc_create", 00:06:21.874 "bdev_null_resize", 00:06:21.874 "bdev_null_delete", 00:06:21.874 "bdev_null_create", 00:06:21.874 "bdev_nvme_cuse_unregister", 00:06:21.874 "bdev_nvme_cuse_register", 00:06:21.874 "bdev_opal_new_user", 00:06:21.874 "bdev_opal_set_lock_state", 00:06:21.874 "bdev_opal_delete", 00:06:21.874 "bdev_opal_get_info", 00:06:21.874 "bdev_opal_create", 00:06:21.874 "bdev_nvme_opal_revert", 00:06:21.874 "bdev_nvme_opal_init", 00:06:21.874 "bdev_nvme_send_cmd", 00:06:21.874 "bdev_nvme_get_path_iostat", 00:06:21.874 "bdev_nvme_get_mdns_discovery_info", 00:06:21.874 "bdev_nvme_stop_mdns_discovery", 00:06:21.874 "bdev_nvme_start_mdns_discovery", 00:06:21.874 "bdev_nvme_set_multipath_policy", 00:06:21.874 "bdev_nvme_set_preferred_path", 00:06:21.874 "bdev_nvme_get_io_paths", 00:06:21.875 "bdev_nvme_remove_error_injection", 00:06:21.875 "bdev_nvme_add_error_injection", 00:06:21.875 "bdev_nvme_get_discovery_info", 00:06:21.875 "bdev_nvme_stop_discovery", 00:06:21.875 "bdev_nvme_start_discovery", 00:06:21.875 "bdev_nvme_get_controller_health_info", 00:06:21.875 "bdev_nvme_disable_controller", 00:06:21.875 "bdev_nvme_enable_controller", 00:06:21.875 "bdev_nvme_reset_controller", 00:06:21.875 "bdev_nvme_get_transport_statistics", 00:06:21.875 "bdev_nvme_apply_firmware", 00:06:21.875 "bdev_nvme_detach_controller", 00:06:21.875 "bdev_nvme_get_controllers", 00:06:21.875 "bdev_nvme_attach_controller", 00:06:21.875 "bdev_nvme_set_hotplug", 00:06:21.875 "bdev_nvme_set_options", 00:06:21.875 "bdev_passthru_delete", 00:06:21.875 "bdev_passthru_create", 00:06:21.875 "bdev_lvol_check_shallow_copy", 00:06:21.875 "bdev_lvol_start_shallow_copy", 00:06:21.875 "bdev_lvol_grow_lvstore", 00:06:21.875 "bdev_lvol_get_lvols", 00:06:21.875 "bdev_lvol_get_lvstores", 00:06:21.875 "bdev_lvol_delete", 00:06:21.875 "bdev_lvol_set_read_only", 00:06:21.875 "bdev_lvol_resize", 00:06:21.875 "bdev_lvol_decouple_parent", 00:06:21.875 "bdev_lvol_inflate", 00:06:21.875 "bdev_lvol_rename", 00:06:21.875 "bdev_lvol_clone_bdev", 00:06:21.875 "bdev_lvol_clone", 00:06:21.875 "bdev_lvol_snapshot", 00:06:21.875 "bdev_lvol_create", 00:06:21.875 "bdev_lvol_delete_lvstore", 00:06:21.875 "bdev_lvol_rename_lvstore", 00:06:21.875 "bdev_lvol_create_lvstore", 00:06:21.875 "bdev_raid_set_options", 00:06:21.875 "bdev_raid_remove_base_bdev", 00:06:21.875 
"bdev_raid_add_base_bdev", 00:06:21.875 "bdev_raid_delete", 00:06:21.875 "bdev_raid_create", 00:06:21.875 "bdev_raid_get_bdevs", 00:06:21.875 "bdev_error_inject_error", 00:06:21.875 "bdev_error_delete", 00:06:21.875 "bdev_error_create", 00:06:21.875 "bdev_split_delete", 00:06:21.875 "bdev_split_create", 00:06:21.875 "bdev_delay_delete", 00:06:21.875 "bdev_delay_create", 00:06:21.875 "bdev_delay_update_latency", 00:06:21.875 "bdev_zone_block_delete", 00:06:21.875 "bdev_zone_block_create", 00:06:21.875 "blobfs_create", 00:06:21.875 "blobfs_detect", 00:06:21.875 "blobfs_set_cache_size", 00:06:21.875 "bdev_crypto_delete", 00:06:21.875 "bdev_crypto_create", 00:06:21.875 "bdev_compress_delete", 00:06:21.875 "bdev_compress_create", 00:06:21.875 "bdev_compress_get_orphans", 00:06:21.875 "bdev_aio_delete", 00:06:21.875 "bdev_aio_rescan", 00:06:21.875 "bdev_aio_create", 00:06:21.875 "bdev_ftl_set_property", 00:06:21.875 "bdev_ftl_get_properties", 00:06:21.875 "bdev_ftl_get_stats", 00:06:21.875 "bdev_ftl_unmap", 00:06:21.875 "bdev_ftl_unload", 00:06:21.875 "bdev_ftl_delete", 00:06:21.875 "bdev_ftl_load", 00:06:21.875 "bdev_ftl_create", 00:06:21.875 "bdev_virtio_attach_controller", 00:06:21.875 "bdev_virtio_scsi_get_devices", 00:06:21.875 "bdev_virtio_detach_controller", 00:06:21.875 "bdev_virtio_blk_set_hotplug", 00:06:21.875 "bdev_iscsi_delete", 00:06:21.875 "bdev_iscsi_create", 00:06:21.875 "bdev_iscsi_set_options", 00:06:21.875 "accel_error_inject_error", 00:06:21.875 "ioat_scan_accel_module", 00:06:21.875 "dsa_scan_accel_module", 00:06:21.875 "iaa_scan_accel_module", 00:06:21.875 "dpdk_cryptodev_get_driver", 00:06:21.875 "dpdk_cryptodev_set_driver", 00:06:21.875 "dpdk_cryptodev_scan_accel_module", 00:06:21.875 "compressdev_scan_accel_module", 00:06:21.875 "keyring_file_remove_key", 00:06:21.875 "keyring_file_add_key", 00:06:21.875 "iscsi_get_histogram", 00:06:21.875 "iscsi_enable_histogram", 00:06:21.875 "iscsi_set_options", 00:06:21.875 "iscsi_get_auth_groups", 00:06:21.875 "iscsi_auth_group_remove_secret", 00:06:21.875 "iscsi_auth_group_add_secret", 00:06:21.875 "iscsi_delete_auth_group", 00:06:21.875 "iscsi_create_auth_group", 00:06:21.875 "iscsi_set_discovery_auth", 00:06:21.875 "iscsi_get_options", 00:06:21.875 "iscsi_target_node_request_logout", 00:06:21.875 "iscsi_target_node_set_redirect", 00:06:21.875 "iscsi_target_node_set_auth", 00:06:21.875 "iscsi_target_node_add_lun", 00:06:21.875 "iscsi_get_stats", 00:06:21.875 "iscsi_get_connections", 00:06:21.875 "iscsi_portal_group_set_auth", 00:06:21.875 "iscsi_start_portal_group", 00:06:21.875 "iscsi_delete_portal_group", 00:06:21.875 "iscsi_create_portal_group", 00:06:21.875 "iscsi_get_portal_groups", 00:06:21.875 "iscsi_delete_target_node", 00:06:21.875 "iscsi_target_node_remove_pg_ig_maps", 00:06:21.875 "iscsi_target_node_add_pg_ig_maps", 00:06:21.875 "iscsi_create_target_node", 00:06:21.875 "iscsi_get_target_nodes", 00:06:21.875 "iscsi_delete_initiator_group", 00:06:21.875 "iscsi_initiator_group_remove_initiators", 00:06:21.875 "iscsi_initiator_group_add_initiators", 00:06:21.875 "iscsi_create_initiator_group", 00:06:21.875 "iscsi_get_initiator_groups", 00:06:21.875 "nvmf_set_crdt", 00:06:21.875 "nvmf_set_config", 00:06:21.875 "nvmf_set_max_subsystems", 00:06:21.875 "nvmf_subsystem_get_listeners", 00:06:21.875 "nvmf_subsystem_get_qpairs", 00:06:21.875 "nvmf_subsystem_get_controllers", 00:06:21.875 "nvmf_get_stats", 00:06:21.875 "nvmf_get_transports", 00:06:21.875 "nvmf_create_transport", 00:06:21.875 "nvmf_get_targets", 00:06:21.875 
"nvmf_delete_target", 00:06:21.875 "nvmf_create_target", 00:06:21.875 "nvmf_subsystem_allow_any_host", 00:06:21.875 "nvmf_subsystem_remove_host", 00:06:21.875 "nvmf_subsystem_add_host", 00:06:21.875 "nvmf_ns_remove_host", 00:06:21.875 "nvmf_ns_add_host", 00:06:21.875 "nvmf_subsystem_remove_ns", 00:06:21.875 "nvmf_subsystem_add_ns", 00:06:21.875 "nvmf_subsystem_listener_set_ana_state", 00:06:21.875 "nvmf_discovery_get_referrals", 00:06:21.875 "nvmf_discovery_remove_referral", 00:06:21.875 "nvmf_discovery_add_referral", 00:06:21.875 "nvmf_subsystem_remove_listener", 00:06:21.875 "nvmf_subsystem_add_listener", 00:06:21.875 "nvmf_delete_subsystem", 00:06:21.875 "nvmf_create_subsystem", 00:06:21.875 "nvmf_get_subsystems", 00:06:21.875 "env_dpdk_get_mem_stats", 00:06:21.875 "nbd_get_disks", 00:06:21.875 "nbd_stop_disk", 00:06:21.875 "nbd_start_disk", 00:06:21.875 "ublk_recover_disk", 00:06:21.875 "ublk_get_disks", 00:06:21.875 "ublk_stop_disk", 00:06:21.875 "ublk_start_disk", 00:06:21.875 "ublk_destroy_target", 00:06:21.875 "ublk_create_target", 00:06:21.875 "virtio_blk_create_transport", 00:06:21.875 "virtio_blk_get_transports", 00:06:21.875 "vhost_controller_set_coalescing", 00:06:21.875 "vhost_get_controllers", 00:06:21.875 "vhost_delete_controller", 00:06:21.875 "vhost_create_blk_controller", 00:06:21.875 "vhost_scsi_controller_remove_target", 00:06:21.875 "vhost_scsi_controller_add_target", 00:06:21.875 "vhost_start_scsi_controller", 00:06:21.875 "vhost_create_scsi_controller", 00:06:21.875 "thread_set_cpumask", 00:06:21.875 "framework_get_scheduler", 00:06:21.875 "framework_set_scheduler", 00:06:21.875 "framework_get_reactors", 00:06:21.875 "thread_get_io_channels", 00:06:21.875 "thread_get_pollers", 00:06:21.875 "thread_get_stats", 00:06:21.875 "framework_monitor_context_switch", 00:06:21.875 "spdk_kill_instance", 00:06:21.875 "log_enable_timestamps", 00:06:21.875 "log_get_flags", 00:06:21.875 "log_clear_flag", 00:06:21.875 "log_set_flag", 00:06:21.875 "log_get_level", 00:06:21.875 "log_set_level", 00:06:21.875 "log_get_print_level", 00:06:21.875 "log_set_print_level", 00:06:21.875 "framework_enable_cpumask_locks", 00:06:21.875 "framework_disable_cpumask_locks", 00:06:21.875 "framework_wait_init", 00:06:21.875 "framework_start_init", 00:06:21.875 "scsi_get_devices", 00:06:21.875 "bdev_get_histogram", 00:06:21.875 "bdev_enable_histogram", 00:06:21.875 "bdev_set_qos_limit", 00:06:21.875 "bdev_set_qd_sampling_period", 00:06:21.875 "bdev_get_bdevs", 00:06:21.875 "bdev_reset_iostat", 00:06:21.875 "bdev_get_iostat", 00:06:21.875 "bdev_examine", 00:06:21.875 "bdev_wait_for_examine", 00:06:21.875 "bdev_set_options", 00:06:21.875 "notify_get_notifications", 00:06:21.875 "notify_get_types", 00:06:21.875 "accel_get_stats", 00:06:21.875 "accel_set_options", 00:06:21.875 "accel_set_driver", 00:06:21.875 "accel_crypto_key_destroy", 00:06:21.875 "accel_crypto_keys_get", 00:06:21.875 "accel_crypto_key_create", 00:06:21.875 "accel_assign_opc", 00:06:21.875 "accel_get_module_info", 00:06:21.875 "accel_get_opc_assignments", 00:06:21.875 "vmd_rescan", 00:06:21.875 "vmd_remove_device", 00:06:21.875 "vmd_enable", 00:06:21.875 "sock_get_default_impl", 00:06:21.875 "sock_set_default_impl", 00:06:21.875 "sock_impl_set_options", 00:06:21.875 "sock_impl_get_options", 00:06:21.875 "iobuf_get_stats", 00:06:21.875 "iobuf_set_options", 00:06:21.875 "framework_get_pci_devices", 00:06:21.875 "framework_get_config", 00:06:21.875 "framework_get_subsystems", 00:06:21.875 "trace_get_info", 00:06:21.875 
"trace_get_tpoint_group_mask", 00:06:21.875 "trace_disable_tpoint_group", 00:06:21.875 "trace_enable_tpoint_group", 00:06:21.875 "trace_clear_tpoint_mask", 00:06:21.875 "trace_set_tpoint_mask", 00:06:21.875 "keyring_get_keys", 00:06:21.875 "spdk_get_version", 00:06:21.875 "rpc_get_methods" 00:06:21.875 ] 00:06:21.875 03:00:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:21.875 03:00:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.875 03:00:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.875 03:00:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:21.875 03:00:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4002532 00:06:21.876 03:00:52 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 4002532 ']' 00:06:21.876 03:00:52 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 4002532 00:06:21.876 03:00:52 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:21.876 03:00:52 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:21.876 03:00:52 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4002532 00:06:21.876 03:00:52 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:21.876 03:00:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:21.876 03:00:52 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4002532' 00:06:21.876 killing process with pid 4002532 00:06:21.876 03:00:52 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 4002532 00:06:21.876 03:00:52 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 4002532 00:06:22.446 00:06:22.446 real 0m1.647s 00:06:22.446 user 0m2.959s 00:06:22.446 sys 0m0.483s 00:06:22.446 03:00:53 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.446 03:00:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.446 ************************************ 00:06:22.447 END TEST spdkcli_tcp 00:06:22.447 ************************************ 00:06:22.447 03:00:53 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:22.447 03:00:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:22.447 03:00:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.447 03:00:53 -- common/autotest_common.sh@10 -- # set +x 00:06:22.447 ************************************ 00:06:22.447 START TEST dpdk_mem_utility 00:06:22.447 ************************************ 00:06:22.447 03:00:53 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:22.447 * Looking for test storage... 
00:06:22.447 03:00:53 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:22.447 03:00:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:22.447 03:00:53 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:22.447 03:00:53 -- common/autotest_common.sh@10 -- # set +x
00:06:22.447 ************************************
00:06:22.447 START TEST dpdk_mem_utility
00:06:22.447 ************************************
00:06:22.447 03:00:53 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:22.447 * Looking for test storage...
00:06:22.447 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility
00:06:22.447 03:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:22.447 03:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4002859
00:06:22.447 03:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4002859
00:06:22.447 03:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:06:22.447 03:00:53 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 4002859 ']'
00:06:22.447 03:00:53 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:22.447 03:00:53 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:22.447 03:00:53 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:22.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:22.447 03:00:53 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:22.447 03:00:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:22.447 [2024-05-15 03:00:53.571307] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:06:22.447 [2024-05-15 03:00:53.571373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002859 ]
00:06:22.713 [2024-05-15 03:00:53.671007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.713 [2024-05-15 03:00:53.768056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.655 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:23.655 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0
00:06:23.655 03:00:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:23.655 03:00:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:23.655 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:23.655 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:23.655 {
00:06:23.655 "filename": "/tmp/spdk_mem_dump.txt"
00:06:23.655 }
00:06:23.655 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:23.655 03:00:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:23.655 DPDK memory size 814.000000 MiB in 1 heap(s)
00:06:23.655 1 heaps totaling size 814.000000 MiB
00:06:23.655 size: 814.000000 MiB heap id: 0
00:06:23.655 end heaps----------
00:06:23.655 8 mempools totaling size 598.116089 MiB
00:06:23.655 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:23.655 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:23.655 size: 84.521057 MiB name: bdev_io_4002859
00:06:23.655 size: 51.011292 MiB name: evtpool_4002859
00:06:23.655 size: 50.003479 MiB name: msgpool_4002859
00:06:23.655 size: 21.763794 MiB name: PDU_Pool
00:06:23.655 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:23.655 size: 0.026123 MiB name: Session_Pool 00:06:23.655 end mempools------- 00:06:23.655 201 memzones totaling size 4.173523 MiB 00:06:23.655 size: 1.000366 MiB name: RG_ring_0_4002859 00:06:23.655 size: 1.000366 MiB name: RG_ring_1_4002859 00:06:23.655 size: 1.000366 MiB name: RG_ring_4_4002859 00:06:23.655 size: 1.000366 MiB name: RG_ring_5_4002859 00:06:23.655 size: 0.125366 MiB name: RG_ring_2_4002859 00:06:23.655 size: 0.015991 MiB name: RG_ring_3_4002859 00:06:23.655 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:06:23.655 size: 0.000244 MiB name: 0000:1a:01.0_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:01.1_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:01.2_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:01.3_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:01.4_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:01.5_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:01.6_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:01.7_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:02.0_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:02.1_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:02.2_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:02.3_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:02.4_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:02.5_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:02.6_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1a:02.7_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:01.0_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:01.1_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:01.2_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:01.3_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:01.4_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:01.5_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:01.6_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:01.7_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:02.0_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:02.1_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:02.2_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:02.3_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:02.4_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:02.5_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:02.6_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1c:02.7_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:01.0_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:01.1_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:01.2_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:01.3_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:01.4_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:01.5_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:01.6_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:01.7_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:02.0_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:02.1_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:02.2_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:02.3_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:02.4_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:02.5_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:02.6_qat 00:06:23.655 size: 0.000244 MiB name: 0000:1e:02.7_qat 00:06:23.655 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:23.655 size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:23.655 size: 0.000122 MiB name: rte_compressdev_data_0 00:06:23.655 size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:23.655 size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:23.655 size: 0.000122 MiB 
name: rte_compressdev_data_1 00:06:23.655 size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:23.655 size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_2 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_3 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_7 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_8 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_4 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_9 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_10 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_5 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_11 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_12 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_6 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_13 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_14 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_7 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_15 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_16 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_8 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_17 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_18 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_9 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_19 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_20 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_10 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_21 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_22 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_11 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_23 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_24 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_12 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_25 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_26 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_13 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_27 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_28 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_14 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_29 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_30 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_15 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_31 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_32 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_16 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_33 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_34 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_17 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_35 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_36 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_18 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_37 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_38 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_19 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_39 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_40 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_20 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_41 00:06:23.656 size: 
0.000122 MiB name: rte_cryptodev_data_42 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_21 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_43 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_44 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_22 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_45 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_46 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_23 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_47 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_48 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_24 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_49 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_50 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_25 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_51 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_52 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_26 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_53 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_54 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_27 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_55 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_56 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_28 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_57 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_58 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_29 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_59 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_60 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_30 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_61 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_62 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_31 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_63 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_64 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_32 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_65 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_66 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_33 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_67 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_68 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_34 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_69 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_70 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_35 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_71 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_72 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_36 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_73 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_74 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_37 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_75 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_76 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_38 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_77 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_78 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_39 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_79 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_80 00:06:23.656 size: 0.000122 MiB name: 
rte_compressdev_data_40 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_81 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_82 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_41 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_83 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_84 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_42 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_85 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_86 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_43 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_87 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_88 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_44 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_89 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_90 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_45 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_91 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_92 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_46 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_93 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_94 00:06:23.656 size: 0.000122 MiB name: rte_compressdev_data_47 00:06:23.656 size: 0.000122 MiB name: rte_cryptodev_data_95 00:06:23.656 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:23.656 end memzones------- 00:06:23.656 03:00:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:23.656 heap id: 0 total size: 814.000000 MiB number of busy elements: 645 number of free elements: 14 00:06:23.656 list of free elements. size: 11.782837 MiB 00:06:23.656 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:23.656 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:23.656 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:23.656 element at address: 0x200003e00000 with size: 0.996460 MiB 00:06:23.656 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:23.656 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:23.656 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:23.656 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:23.656 element at address: 0x20001aa00000 with size: 0.563843 MiB 00:06:23.656 element at address: 0x200003a00000 with size: 0.497253 MiB 00:06:23.656 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:23.656 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:23.656 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:23.656 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:23.656 list of standard malloc elements. 
size: 199.900085 MiB 00:06:23.656 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:23.656 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:23.656 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:23.656 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:23.656 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:23.656 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:23.656 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:23.656 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:23.656 element at address: 0x20000032c840 with size: 0.004395 MiB 00:06:23.656 element at address: 0x2000003302c0 with size: 0.004395 MiB 00:06:23.656 element at address: 0x200000333d40 with size: 0.004395 MiB 00:06:23.656 element at address: 0x2000003377c0 with size: 0.004395 MiB 00:06:23.656 element at address: 0x20000033b240 with size: 0.004395 MiB 00:06:23.656 element at address: 0x20000033ecc0 with size: 0.004395 MiB 00:06:23.656 element at address: 0x200000342740 with size: 0.004395 MiB 00:06:23.656 element at address: 0x2000003461c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x200000349c40 with size: 0.004395 MiB 00:06:23.657 element at address: 0x20000034d6c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x200000351140 with size: 0.004395 MiB 00:06:23.657 element at address: 0x200000354bc0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x200000358640 with size: 0.004395 MiB 00:06:23.657 element at address: 0x20000035c0c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x20000035fb40 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003635c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x200000367040 with size: 0.004395 MiB 00:06:23.657 element at address: 0x20000036aac0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x20000036e540 with size: 0.004395 MiB 00:06:23.657 element at address: 0x200000371fc0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x200000375a40 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003794c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x20000037cf40 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003809c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x200000384440 with size: 0.004395 MiB 00:06:23.657 element at address: 0x200000387ec0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x20000038b940 with size: 0.004395 MiB 00:06:23.657 element at address: 0x20000038f3c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x200000392e40 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003968c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x20000039a340 with size: 0.004395 MiB 00:06:23.657 element at address: 0x20000039ddc0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003a1840 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003a52c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003a8d40 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003ac7c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003b0240 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003b3cc0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003b7740 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003bb1c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003bec40 with size: 0.004395 MiB 
00:06:23.657 element at address: 0x2000003c26c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003c6140 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003c9bc0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003cd640 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003d10c0 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003d4b40 with size: 0.004395 MiB 00:06:23.657 element at address: 0x2000003d8d00 with size: 0.004395 MiB 00:06:23.657 element at address: 0x20000032a740 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000032b7c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000032e1c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000032f240 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000331c40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000332cc0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003356c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000336740 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000339140 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000033a1c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000033cbc0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000033dc40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000340640 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003416c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003440c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000345140 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000347b40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000348bc0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000034b5c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000034c640 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000034f040 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003500c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000352ac0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000353b40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000356540 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003575c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000359fc0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000035b040 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000035da40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000035eac0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003614c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000362540 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000364f40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000365fc0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003689c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000369a40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000036c440 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000036d4c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000036fec0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000370f40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000373940 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003749c0 with size: 0.004028 MiB 00:06:23.657 element at 
address: 0x2000003773c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000378440 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000037ae40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000037bec0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000037e8c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000037f940 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000382340 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003833c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000385dc0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000386e40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000389840 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000038a8c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000038d2c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000038e340 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000390d40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000391dc0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003947c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000395840 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000398240 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003992c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000039bcc0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000039cd40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x20000039f740 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003a07c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003a31c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003a4240 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003a6c40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003a7cc0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003aa6c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003ab740 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003ae140 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003af1c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003b1bc0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003b2c40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003b5640 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003b66c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003b90c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003ba140 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003bcb40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003bdbc0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003c05c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003c1640 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003c4040 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003c50c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003c7ac0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003c8b40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003cb540 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003cc5c0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003cefc0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003d0040 
with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003d2a40 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003d3ac0 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003d6c00 with size: 0.004028 MiB 00:06:23.657 element at address: 0x2000003d7c80 with size: 0.004028 MiB 00:06:23.657 element at address: 0x200000200000 with size: 0.000305 MiB 00:06:23.657 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:23.657 element at address: 0x200000200140 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200200 with size: 0.000183 MiB 00:06:23.657 element at address: 0x2000002002c0 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200380 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200440 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200500 with size: 0.000183 MiB 00:06:23.657 element at address: 0x2000002005c0 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200680 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200740 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200800 with size: 0.000183 MiB 00:06:23.657 element at address: 0x2000002008c0 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200980 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200a40 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200b00 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200bc0 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200c80 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200d40 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200e00 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200ec0 with size: 0.000183 MiB 00:06:23.657 element at address: 0x200000200f80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201040 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201100 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000002011c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201280 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201340 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201400 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000002014c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201580 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201640 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201700 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000002017c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201880 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201940 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201a00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201ac0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000201cc0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000205f80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226240 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226300 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000002263c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226480 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226540 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226600 with size: 0.000183 MiB 
00:06:23.658 element at address: 0x2000002266c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226780 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226840 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226900 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000002269c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226a80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226b40 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226c00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226cc0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226d80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226e40 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000226f00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227100 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000002271c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227280 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227340 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227400 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000002274c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227580 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227640 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227700 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000002277c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227880 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227940 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227a00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227ac0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227b80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227c40 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000227d00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000329f00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000329fc0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000032a180 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000032a340 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000032a400 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000032da40 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000032dc00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000032ddc0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000032de80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000003314c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000331680 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000331840 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000331900 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000334f40 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000335100 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000003352c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000335380 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000003389c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000338b80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000338d40 with size: 0.000183 MiB 00:06:23.658 element at 
address: 0x200000338e00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000033c440 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000033c600 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000033c7c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000033c880 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000033fec0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000340080 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000340240 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000340300 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000343940 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000343b00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000343cc0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000343d80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000003473c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000347580 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000347740 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000347800 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000034ae40 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000034b000 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000034b1c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000034b280 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000034e8c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000034ea80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000034ec40 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000034ed00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000352340 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000352500 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000003526c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000352780 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000355dc0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000355f80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000356140 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000356200 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000359840 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000359a00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000359bc0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000359c80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000035d2c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000035d480 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000035d640 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000035d700 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000360d40 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000360f00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000003610c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000361180 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000003647c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000364980 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000364b40 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000364c00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000368240 
with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000368400 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000003685c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000368680 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000036bcc0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000036be80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000036c040 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000036c100 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000036f740 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000036f900 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000036fac0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000036fb80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x2000003731c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000373380 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000373540 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000373600 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000376c40 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000376e00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000376fc0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000377080 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000037a6c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000037a880 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000037aa40 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000037ab00 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000037e140 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000037e300 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000037e4c0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x20000037e580 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000381bc0 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000381d80 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000381f40 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000382000 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000385640 with size: 0.000183 MiB 00:06:23.658 element at address: 0x200000385800 with size: 0.000183 MiB 00:06:23.659 element at address: 0x2000003859c0 with size: 0.000183 MiB 00:06:23.659 element at address: 0x200000385a80 with size: 0.000183 MiB 00:06:23.659 element at address: 0x2000003890c0 with size: 0.000183 MiB 00:06:23.659 element at address: 0x200000389280 with size: 0.000183 MiB 00:06:23.659 element at address: 0x200000389440 with size: 0.000183 MiB 00:06:23.659 element at address: 0x200000389500 with size: 0.000183 MiB 00:06:23.659 element at address: 0x20000038cb40 with size: 0.000183 MiB 00:06:23.659 element at address: 0x20000038cd00 with size: 0.000183 MiB 00:06:23.659 element at address: 0x20000038cec0 with size: 0.000183 MiB 00:06:23.659 element at address: 0x20000038cf80 with size: 0.000183 MiB 00:06:23.659 element at address: 0x2000003905c0 with size: 0.000183 MiB 00:06:23.659 element at address: 0x200000390780 with size: 0.000183 MiB 00:06:23.659 element at address: 0x200000390940 with size: 0.000183 MiB 00:06:23.659 element at address: 0x200000390a00 with size: 0.000183 MiB 00:06:23.659 element at address: 0x200000394040 with size: 0.000183 MiB 00:06:23.659 element at address: 0x200000394200 with size: 0.000183 MiB 
00:06:23.659 element at address: 0x2000003943c0 with size: 0.000183 MiB
00:06:23.659 [several hundred further free-list elements between 0x200000394480 and 0x200027e6ff00, each 0.000183 MiB, elided]
00:06:23.661 list of memzone associated elements. size: 602.317078 MiB
00:06:23.661 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:06:23.661 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:23.661 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:06:23.661 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:23.661 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:06:23.661 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4002859_0
00:06:23.661 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:23.661 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4002859_0
00:06:23.661 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:23.661 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4002859_0
00:06:23.661 element at address: 0x2000195be940 with size: 20.255554 MiB
00:06:23.661 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:23.661 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:06:23.661 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:23.661 [smaller per-pid ring and pool memzones elided: RG_ring_0..5_4002859, RG_MP_evtpool/msgpool/bdev_io_4002859, RG_MP_* pool rings, MP_Session_Pool_0]
00:06:23.662 [QAT device memzones elided: one 0.000366 MiB element per virtual function for 0000:1a:01.0-02.7_qat, 0000:1c:01.0-02.7_qat and 0000:1e:01.0-02.7_qat, plus QAT_SYM_CAPA_GEN_1, QAT_ASYM_CAPA_GEN_1 and QAT_COMP_CAPA_GEN_1 capability zones]
00:06:23.664 [per-device driver data elided: rte_cryptodev_data_0 through rte_cryptodev_data_95 and rte_compressdev_data_0 through rte_compressdev_data_47, 0.000244 MiB elements each]
00:06:23.665 03:00:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:23.665 03:00:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4002859
00:06:23.665 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 4002859 ']'
00:06:23.665 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 4002859
00:06:23.665 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname
00:06:23.665 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:23.665 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4002859
00:06:23.665 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:23.665 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:23.665 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4002859'
00:06:23.665 killing process with pid 4002859
00:06:23.665 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 4002859
00:06:23.665 03:00:54 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 4002859
00:06:24.236
00:06:24.236 real 0m1.751s
00:06:24.236 user 0m2.043s
00:06:24.236 sys 0m0.454s
00:06:24.236 03:00:55 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:24.236 03:00:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:24.237 ************************************
00:06:24.237 END TEST dpdk_mem_utility
00:06:24.237 ************************************
00:06:24.237 03:00:55 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh
00:06:24.237 03:00:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:24.237 03:00:55 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:24.237 03:00:55 -- common/autotest_common.sh@10 -- # set +x
00:06:24.237 ************************************
00:06:24.237 START TEST event
00:06:24.237 ************************************
00:06:24.237 03:00:55 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh
00:06:24.237 * Looking for test storage...
00:06:24.237 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event
00:06:24.237 03:00:55 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:24.237 03:00:55 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:24.237 03:00:55 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:24.237 03:00:55 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:06:24.237 03:00:55 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:24.237 03:00:55 event -- common/autotest_common.sh@10 -- # set +x
00:06:24.237 ************************************
00:06:24.237 START TEST event_perf
00:06:24.237 ************************************
00:06:24.237 03:00:55 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:24.237 Running I/O for 1 seconds...[2024-05-15 03:00:55.380962] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
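The killprocess trace above is the harness's guarded kill-and-reap step. A minimal bash sketch of the same pattern, reconstructed from the xtrace (illustrative only; the real helper in autotest_common.sh carries more bookkeeping than this):

```bash
# Sketch of the killprocess pattern traced above (not the verbatim
# autotest_common.sh source): check liveness, refuse to signal a sudo
# wrapper, then kill and reap the target pid.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # no pid recorded for this test
    kill -0 "$pid" 2>/dev/null || return 0    # process already exited
    if [ "$(uname)" = Linux ]; then
        # the xtrace shows this guard: never signal a sudo wrapper directly
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # reap the child so its exit status is collected
}
```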
00:06:24.237 [2024-05-15 03:00:55.381016] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4003338 ]
00:06:24.495 [2024-05-15 03:00:55.480465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:24.495 [2024-05-15 03:00:55.576067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:24.495 [2024-05-15 03:00:55.576089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:24.495 [2024-05-15 03:00:55.576182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:24.495 [2024-05-15 03:00:55.576183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.870 Running I/O for 1 seconds...
00:06:25.870 lcore 0: 161875
00:06:25.870 lcore 1: 161872
00:06:25.870 lcore 2: 161872
00:06:25.870 lcore 3: 161874
00:06:25.870 done.
00:06:25.870
00:06:25.870 real 0m1.323s
00:06:25.870 user 0m4.209s
00:06:25.870 sys 0m0.108s
00:06:25.870 03:00:56 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:25.870 03:00:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:25.870 ************************************
00:06:25.870 END TEST event_perf
00:06:25.870 ************************************
00:06:25.870 03:00:56 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:25.870 03:00:56 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:06:25.870 03:00:56 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:25.870 03:00:56 event -- common/autotest_common.sh@10 -- # set +x
00:06:25.870 ************************************
00:06:25.870 START TEST event_reactor
00:06:25.870 ************************************
00:06:25.870 03:00:56 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:25.870 [2024-05-15 03:00:56.777737] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
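The four lcore counters in the event_perf run above are per-reactor event counts for the one-second window requested with -t 1 (roughly 647k events total across the 0xF core mask). A throwaway awk pass (log file name hypothetical) that totals them from a captured run:

```bash
# Hypothetical post-processing: sum the "lcore N: COUNT" lines that
# event_perf printed above. $NF is the counter, regardless of whatever
# timestamp prefixes the CI log added.
awk '/lcore [0-9]+:/ { total += $NF; n++ }
     END { printf "%d events in 1s across %d lcores\n", total, n }' event_perf.log
```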
00:06:25.870 [2024-05-15 03:00:56.777790] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4003590 ] 00:06:25.871 [2024-05-15 03:00:56.876180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.871 [2024-05-15 03:00:56.962970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.252 test_start 00:06:27.252 oneshot 00:06:27.252 tick 100 00:06:27.252 tick 100 00:06:27.252 tick 250 00:06:27.252 tick 100 00:06:27.252 tick 100 00:06:27.252 tick 100 00:06:27.252 tick 250 00:06:27.252 tick 500 00:06:27.252 tick 100 00:06:27.252 tick 100 00:06:27.252 tick 250 00:06:27.252 tick 100 00:06:27.252 tick 100 00:06:27.252 test_end 00:06:27.252 00:06:27.252 real 0m1.316s 00:06:27.252 user 0m1.213s 00:06:27.252 sys 0m0.097s 00:06:27.252 03:00:58 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.252 03:00:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:27.252 ************************************ 00:06:27.252 END TEST event_reactor 00:06:27.252 ************************************ 00:06:27.252 03:00:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:27.252 03:00:58 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:27.252 03:00:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.252 03:00:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.252 ************************************ 00:06:27.252 START TEST event_reactor_perf 00:06:27.252 ************************************ 00:06:27.252 03:00:58 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:27.252 [2024-05-15 03:00:58.165632] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:06:27.252 [2024-05-15 03:00:58.165686] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4003833 ] 00:06:27.252 [2024-05-15 03:00:58.265025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.252 [2024-05-15 03:00:58.352870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.628 test_start 00:06:28.628 test_end 00:06:28.628 Performance: 296102 events per second 00:06:28.628 00:06:28.628 real 0m1.316s 00:06:28.628 user 0m1.205s 00:06:28.628 sys 0m0.105s 00:06:28.628 03:00:59 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.628 03:00:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:28.628 ************************************ 00:06:28.628 END TEST event_reactor_perf 00:06:28.628 ************************************ 00:06:28.628 03:00:59 event -- event/event.sh@49 -- # uname -s 00:06:28.628 03:00:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:28.628 03:00:59 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:28.628 03:00:59 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.628 03:00:59 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.628 03:00:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.628 ************************************ 00:06:28.628 START TEST event_scheduler 00:06:28.628 ************************************ 00:06:28.628 03:00:59 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:28.628 * Looking for test storage... 00:06:28.628 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:06:28.628 03:00:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:28.628 03:00:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4004110 00:06:28.628 03:00:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.628 03:00:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:28.628 03:00:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4004110 00:06:28.628 03:00:59 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 4004110 ']' 00:06:28.628 03:00:59 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.628 03:00:59 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.628 03:00:59 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.628 03:00:59 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.628 03:00:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.628 [2024-05-15 03:00:59.682162] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
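reactor_perf condenses its result to a single line ('Performance: 296102 events per second'), which makes run-to-run comparison trivial. A small sketch for capturing just that figure, under the same path assumption as the sketches above:

  # sketch: extract the events-per-second figure for diffing across runs
  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  sudo "$SPDK/test/event/reactor_perf/reactor_perf" -t 1 | awk '/Performance:/ {print $2}'   # e.g. 296102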
00:06:28.628 [2024-05-15 03:00:59.682221] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4004110 ] 00:06:28.628 [2024-05-15 03:00:59.754584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.887 [2024-05-15 03:00:59.833295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.887 [2024-05-15 03:00:59.833387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.887 [2024-05-15 03:00:59.833493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.887 [2024-05-15 03:00:59.833495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.887 03:00:59 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.887 03:00:59 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:28.887 03:00:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:28.887 03:00:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.887 03:00:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.887 POWER: Env isn't set yet! 00:06:28.887 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:28.887 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:28.887 POWER: Cannot set governor of lcore 0 to userspace 00:06:28.887 POWER: Attempting to initialise PSTAT power management... 00:06:28.887 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:28.887 POWER: Initialized successfully for lcore 0 power management 00:06:28.887 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:28.887 POWER: Initialized successfully for lcore 1 power management 00:06:28.887 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:28.887 POWER: Initialized successfully for lcore 2 power management 00:06:28.887 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:28.887 POWER: Initialized successfully for lcore 3 power management 00:06:28.887 03:00:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.887 03:00:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:28.887 03:00:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.887 03:00:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.887 [2024-05-15 03:01:00.027449] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
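The bring-up above is the usual --wait-for-rpc pattern: the scheduler app parks before subsystem init, the test switches the scheduler over RPC, and framework_start_init releases it; the POWER notices record DPDK moving each lcore's cpufreq governor to 'performance'. Reduced to a sketch, with the app flags taken verbatim from the log; the default /var/tmp/spdk.sock is assumed, and the socket loop is a crude stand-in for the waitforlisten helper:

  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  sudo "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # stand-in for waitforlisten (assumed socket path)
  "$SPDK/scripts/rpc.py" framework_set_scheduler dynamic   # same RPC issued via rpc_cmd above
  "$SPDK/scripts/rpc.py" framework_start_init              # let the app finish initializing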
00:06:28.887 03:01:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.887 03:01:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:28.887 03:01:00 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.887 03:01:00 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.887 03:01:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:29.145 ************************************ 00:06:29.145 START TEST scheduler_create_thread 00:06:29.145 ************************************ 00:06:29.145 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:29.145 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:29.145 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.145 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.145 2 00:06:29.145 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.146 3 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.146 4 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.146 5 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.146 6 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.146 7 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.146 8 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.146 9 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.146 10 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.146 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.714 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.714 03:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:29.714 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.714 03:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.090 03:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.090 03:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:31.090 03:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:31.090 03:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.090 03:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.021 03:01:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.021 00:06:32.021 real 0m3.101s 00:06:32.021 user 0m0.023s 00:06:32.021 sys 0m0.005s 00:06:32.021 03:01:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.021 03:01:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.021 ************************************ 00:06:32.021 END TEST scheduler_create_thread 00:06:32.021 ************************************ 00:06:32.279 03:01:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:32.279 03:01:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4004110 00:06:32.279 03:01:03 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 4004110 ']' 00:06:32.279 03:01:03 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 4004110 00:06:32.279 03:01:03 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:32.279 03:01:03 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.279 03:01:03 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4004110 00:06:32.279 03:01:03 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:32.279 03:01:03 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:32.279 03:01:03 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4004110' 00:06:32.279 killing process with pid 4004110 00:06:32.279 03:01:03 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 4004110 00:06:32.279 03:01:03 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 4004110 00:06:32.536 [2024-05-15 03:01:03.546836] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
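scheduler_create_thread drove three plugin RPCs end to end: scheduler_thread_create with a name, cpumask and active percentage, scheduler_thread_set_active to retune a thread, and scheduler_thread_delete. Condensed into a sketch; note the --plugin flag only resolves if PYTHONPATH includes the scheduler test directory, which scheduler.sh is assumed to export:

  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" --plugin scheduler_plugin "$@"; }
  tid=$(rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)   # returns the new thread id (cf. thread_id=11 above)
  rpc scheduler_thread_set_active "$tid" 50                           # rebalance to 50% active
  rpc scheduler_thread_delete "$tid"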
00:06:32.536 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:32.536 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:32.536 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:32.536 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:32.536 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:32.536 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:32.536 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:32.536 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:32.795 00:06:32.795 real 0m4.247s 00:06:32.795 user 0m6.905s 00:06:32.795 sys 0m0.358s 00:06:32.795 03:01:03 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.795 03:01:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:32.795 ************************************ 00:06:32.795 END TEST event_scheduler 00:06:32.795 ************************************ 00:06:32.795 03:01:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:32.795 03:01:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:32.795 03:01:03 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.795 03:01:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.795 03:01:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.795 ************************************ 00:06:32.795 START TEST app_repeat 00:06:32.795 ************************************ 00:06:32.795 03:01:03 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4004839 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4004839' 00:06:32.795 Process app_repeat pid: 4004839 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:32.795 spdk_app_start Round 0 00:06:32.795 03:01:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4004839 /var/tmp/spdk-nbd.sock 00:06:32.795 03:01:03 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4004839 ']' 00:06:32.795 03:01:03 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.795 03:01:03 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.795 03:01:03 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:32.795 03:01:03 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.795 03:01:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.795 [2024-05-15 03:01:03.899215] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:32.795 [2024-05-15 03:01:03.899269] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4004839 ] 00:06:33.054 [2024-05-15 03:01:03.998550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.054 [2024-05-15 03:01:04.088182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.054 [2024-05-15 03:01:04.088187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.054 03:01:04 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.054 03:01:04 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:33.054 03:01:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.312 Malloc0 00:06:33.312 03:01:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.580 Malloc1 00:06:33.580 03:01:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.580 03:01:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.839 /dev/nbd0 00:06:33.839 03:01:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.839 03:01:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.839 1+0 records in 00:06:33.839 1+0 records out 00:06:33.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188068 s, 21.8 MB/s 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:33.839 03:01:04 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:33.839 03:01:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.839 03:01:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.839 03:01:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.098 /dev/nbd1 00:06:34.098 03:01:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.098 03:01:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.098 03:01:05 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:34.098 03:01:05 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:34.098 03:01:05 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:34.098 03:01:05 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:34.098 03:01:05 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:34.098 03:01:05 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:34.099 03:01:05 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:34.099 03:01:05 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:34.099 03:01:05 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.099 1+0 records in 00:06:34.099 1+0 records out 00:06:34.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183997 s, 22.3 MB/s 00:06:34.099 03:01:05 event.app_repeat -- 
common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:34.099 03:01:05 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:34.099 03:01:05 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:34.099 03:01:05 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:34.099 03:01:05 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:34.099 03:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.099 03:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.099 03:01:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.099 03:01:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.099 03:01:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.357 03:01:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:34.357 { 00:06:34.357 "nbd_device": "/dev/nbd0", 00:06:34.357 "bdev_name": "Malloc0" 00:06:34.357 }, 00:06:34.357 { 00:06:34.357 "nbd_device": "/dev/nbd1", 00:06:34.357 "bdev_name": "Malloc1" 00:06:34.357 } 00:06:34.357 ]' 00:06:34.357 03:01:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.357 { 00:06:34.357 "nbd_device": "/dev/nbd0", 00:06:34.357 "bdev_name": "Malloc0" 00:06:34.357 }, 00:06:34.357 { 00:06:34.357 "nbd_device": "/dev/nbd1", 00:06:34.358 "bdev_name": "Malloc1" 00:06:34.358 } 00:06:34.358 ]' 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:34.358 /dev/nbd1' 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:34.358 /dev/nbd1' 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:34.358 03:01:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:34.616 256+0 records in 00:06:34.616 256+0 records out 00:06:34.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0097527 s, 108 MB/s 00:06:34.616 03:01:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.616 03:01:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:34.616 256+0 records in 00:06:34.616 256+0 records out 00:06:34.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200872 s, 52.2 MB/s 00:06:34.616 03:01:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.616 03:01:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:34.616 256+0 records in 00:06:34.616 256+0 records out 00:06:34.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217472 s, 48.2 MB/s 00:06:34.616 03:01:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:34.616 03:01:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.617 03:01:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.876 
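Both waitfornbd and waitfornbd_exit in the xtrace above poll /proc/partitions, up to 20 attempts, for the nbd name to appear or vanish. A standalone equivalent of that wait; the retry bound is taken from the (( i <= 20 )) checks in the log, while the sleep interval here is an arbitrary choice, not read from the helper:

  # wait for /dev/nbd0 to appear, as waitfornbd does
  for i in $(seq 1 20); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1   # interval assumed
  done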
03:01:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.876 03:01:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.134 03:01:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.134 03:01:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.134 03:01:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.134 03:01:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.134 03:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.134 03:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.134 03:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:35.134 03:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.134 03:01:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.134 03:01:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.135 03:01:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.135 03:01:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.135 03:01:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:35.701 03:01:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.701 [2024-05-15 03:01:06.811702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.960 [2024-05-15 03:01:06.898092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.960 [2024-05-15 03:01:06.898097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.960 [2024-05-15 03:01:06.944544] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.960 [2024-05-15 03:01:06.944590] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
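That closes Round 0. Each round repeats the cycle seen above: create two malloc bdevs, export them over nbd, write a random 1 MiB pattern through each device, and byte-compare it back. Condensed into a sketch, with the socket, sizes, flags and file name all taken from the trace (one device shown; the log does the same for Malloc1 on /dev/nbd1):

  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  sock=/var/tmp/spdk-nbd.sock
  rpc() { "$SPDK/scripts/rpc.py" -s "$sock" "$@"; }
  rpc bdev_malloc_create 64 4096                        # -> Malloc0: 64 MB, 4 KiB blocks
  rpc nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB of random data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0                    # verify; nonzero exit on any mismatch
  rpc nbd_stop_disk /dev/nbd0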
00:06:38.516 03:01:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.516 03:01:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:38.516 spdk_app_start Round 1 00:06:38.516 03:01:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4004839 /var/tmp/spdk-nbd.sock 00:06:38.516 03:01:09 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4004839 ']' 00:06:38.516 03:01:09 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.516 03:01:09 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.516 03:01:09 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.516 03:01:09 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.516 03:01:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.774 03:01:09 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.774 03:01:09 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:38.774 03:01:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.033 Malloc0 00:06:39.033 03:01:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.291 Malloc1 00:06:39.291 03:01:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.291 03:01:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.549 /dev/nbd0 00:06:39.549 03:01:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.549 03:01:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.549 1+0 records in 00:06:39.549 1+0 records out 00:06:39.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182386 s, 22.5 MB/s 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:39.549 03:01:10 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:39.549 03:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.549 03:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.549 03:01:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.808 /dev/nbd1 00:06:39.808 03:01:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.808 03:01:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.808 1+0 records in 00:06:39.808 1+0 records out 00:06:39.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208759 s, 19.6 MB/s 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@883 
-- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:39.808 03:01:10 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:39.808 03:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.808 03:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.808 03:01:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.808 03:01:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.808 03:01:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.066 03:01:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.066 { 00:06:40.066 "nbd_device": "/dev/nbd0", 00:06:40.066 "bdev_name": "Malloc0" 00:06:40.066 }, 00:06:40.066 { 00:06:40.066 "nbd_device": "/dev/nbd1", 00:06:40.066 "bdev_name": "Malloc1" 00:06:40.066 } 00:06:40.066 ]' 00:06:40.066 03:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.066 { 00:06:40.066 "nbd_device": "/dev/nbd0", 00:06:40.066 "bdev_name": "Malloc0" 00:06:40.066 }, 00:06:40.067 { 00:06:40.067 "nbd_device": "/dev/nbd1", 00:06:40.067 "bdev_name": "Malloc1" 00:06:40.067 } 00:06:40.067 ]' 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.067 /dev/nbd1' 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.067 /dev/nbd1' 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.067 256+0 records in 00:06:40.067 256+0 records out 00:06:40.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00978068 s, 107 MB/s 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.067 256+0 records in 00:06:40.067 256+0 records out 00:06:40.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208497 s, 50.3 MB/s 00:06:40.067 
03:01:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.067 256+0 records in 00:06:40.067 256+0 records out 00:06:40.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216323 s, 48.5 MB/s 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.067 03:01:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.326 03:01:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.326 03:01:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.326 03:01:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.326 03:01:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.326 03:01:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.326 03:01:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.326 03:01:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.326 03:01:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.326 03:01:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.326 03:01:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 
00:06:40.585 03:01:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.585 03:01:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.585 03:01:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.585 03:01:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.585 03:01:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.585 03:01:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.585 03:01:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.585 03:01:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.585 03:01:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.585 03:01:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.585 03:01:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.845 03:01:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.845 03:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.845 03:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.845 03:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.845 03:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.845 03:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.845 03:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.845 03:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.845 03:01:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.845 03:01:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.845 03:01:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.845 03:01:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.845 03:01:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.103 03:01:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:41.362 [2024-05-15 03:01:12.443298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.637 [2024-05-15 03:01:12.530590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.637 [2024-05-15 03:01:12.530595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.637 [2024-05-15 03:01:12.578092] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.638 [2024-05-15 03:01:12.578137] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
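The round boundary reads the same every time: spdk_kill_instance SIGTERM goes over the nbd socket, event.sh sleeps three seconds, and because app_repeat calls spdk_app_start again inside the same process, the notify types registered in the previous round trigger the benign 'already registered' notices on restart. The teardown half reduces to:

  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # ask the target to exit
  sleep 3   # matches the 'sleep 3' in event.sh's repeat loop above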
00:06:44.192 03:01:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:44.192 03:01:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:44.192 spdk_app_start Round 2 00:06:44.192 03:01:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4004839 /var/tmp/spdk-nbd.sock 00:06:44.192 03:01:15 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4004839 ']' 00:06:44.192 03:01:15 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.192 03:01:15 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.192 03:01:15 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.192 03:01:15 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.192 03:01:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.456 03:01:15 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.456 03:01:15 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:44.456 03:01:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.716 Malloc0 00:06:44.716 03:01:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.975 Malloc1 00:06:44.975 03:01:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.975 03:01:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.975 /dev/nbd0 00:06:44.975 03:01:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.975 03:01:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.975 1+0 records in 00:06:44.975 1+0 records out 00:06:44.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189973 s, 21.6 MB/s 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:44.975 03:01:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:44.975 03:01:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.975 03:01:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.975 03:01:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.234 /dev/nbd1 00:06:45.492 03:01:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.492 03:01:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.492 1+0 records in 00:06:45.492 1+0 records out 00:06:45.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224473 s, 18.2 MB/s 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@883 
-- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:45.492 03:01:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:45.492 03:01:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.492 03:01:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.492 03:01:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.492 03:01:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.492 03:01:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.750 03:01:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.750 { 00:06:45.750 "nbd_device": "/dev/nbd0", 00:06:45.750 "bdev_name": "Malloc0" 00:06:45.750 }, 00:06:45.750 { 00:06:45.750 "nbd_device": "/dev/nbd1", 00:06:45.750 "bdev_name": "Malloc1" 00:06:45.750 } 00:06:45.750 ]' 00:06:45.750 03:01:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.750 { 00:06:45.750 "nbd_device": "/dev/nbd0", 00:06:45.750 "bdev_name": "Malloc0" 00:06:45.750 }, 00:06:45.750 { 00:06:45.750 "nbd_device": "/dev/nbd1", 00:06:45.751 "bdev_name": "Malloc1" 00:06:45.751 } 00:06:45.751 ]' 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.751 /dev/nbd1' 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.751 /dev/nbd1' 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.751 256+0 records in 00:06:45.751 256+0 records out 00:06:45.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00972394 s, 108 MB/s 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.751 256+0 records in 00:06:45.751 256+0 records out 00:06:45.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202782 s, 51.7 MB/s 00:06:45.751 
03:01:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.751 256+0 records in 00:06:45.751 256+0 records out 00:06:45.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021773 s, 48.2 MB/s 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.751 03:01:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.009 03:01:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.009 03:01:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.009 03:01:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.009 03:01:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.009 03:01:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.009 03:01:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.009 03:01:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.009 03:01:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.009 03:01:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.009 03:01:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 
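The write/verify pass above pushes one identical 1 MiB buffer of random data through both devices with O_DIRECT and then compares it back byte-for-byte straight off each block device. Condensed into a sketch (temp path shortened):

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest

    # write phase: 256 x 4 KiB of random data, pushed to each device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: the first 1 MiB of each device must match the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"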
00:06:46.267 03:01:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.267 03:01:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.267 03:01:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.267 03:01:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.267 03:01:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.267 03:01:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.267 03:01:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.267 03:01:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.267 03:01:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.267 03:01:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.267 03:01:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.525 03:01:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.525 03:01:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.525 03:01:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.525 03:01:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.525 03:01:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.525 03:01:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.525 03:01:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.525 03:01:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.525 03:01:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.525 03:01:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.525 03:01:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.525 03:01:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.525 03:01:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.792 03:01:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.054 [2024-05-15 03:01:18.190455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.312 [2024-05-15 03:01:18.277196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.312 [2024-05-15 03:01:18.277201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.312 [2024-05-15 03:01:18.323884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.312 [2024-05-15 03:01:18.323930] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:49.861 03:01:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4004839 /var/tmp/spdk-nbd.sock 00:06:49.861 03:01:20 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4004839 ']' 00:06:49.861 03:01:20 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.861 03:01:20 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.861 03:01:20 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:49.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.861 03:01:20 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.861 03:01:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.121 03:01:21 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.121 03:01:21 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:50.121 03:01:21 event.app_repeat -- event/event.sh@39 -- # killprocess 4004839 00:06:50.121 03:01:21 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 4004839 ']' 00:06:50.121 03:01:21 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 4004839 00:06:50.121 03:01:21 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:50.121 03:01:21 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:50.121 03:01:21 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4004839 00:06:50.121 03:01:21 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:50.121 03:01:21 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:50.121 03:01:21 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4004839' 00:06:50.121 killing process with pid 4004839 00:06:50.121 03:01:21 event.app_repeat -- common/autotest_common.sh@965 -- # kill 4004839 00:06:50.121 03:01:21 event.app_repeat -- common/autotest_common.sh@970 -- # wait 4004839 00:06:50.380 spdk_app_start is called in Round 0. 00:06:50.380 Shutdown signal received, stop current app iteration 00:06:50.380 Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 reinitialization... 00:06:50.380 spdk_app_start is called in Round 1. 00:06:50.380 Shutdown signal received, stop current app iteration 00:06:50.380 Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 reinitialization... 00:06:50.380 spdk_app_start is called in Round 2. 00:06:50.380 Shutdown signal received, stop current app iteration 00:06:50.380 Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 reinitialization... 00:06:50.380 spdk_app_start is called in Round 3. 
00:06:50.380 Shutdown signal received, stop current app iteration 00:06:50.380 03:01:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:50.380 03:01:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:50.380 00:06:50.380 real 0m17.586s 00:06:50.380 user 0m38.758s 00:06:50.380 sys 0m2.795s 00:06:50.380 03:01:21 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.380 03:01:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.380 ************************************ 00:06:50.380 END TEST app_repeat 00:06:50.380 ************************************ 00:06:50.380 03:01:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:50.380 00:06:50.380 real 0m26.243s 00:06:50.380 user 0m52.448s 00:06:50.380 sys 0m3.776s 00:06:50.380 03:01:21 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.380 03:01:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.380 ************************************ 00:06:50.380 END TEST event 00:06:50.380 ************************************ 00:06:50.380 03:01:21 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:50.380 03:01:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:50.380 03:01:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.380 03:01:21 -- common/autotest_common.sh@10 -- # set +x 00:06:50.639 ************************************ 00:06:50.639 START TEST thread 00:06:50.639 ************************************ 00:06:50.639 03:01:21 thread -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:50.639 * Looking for test storage... 00:06:50.639 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:06:50.639 03:01:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:50.639 03:01:21 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:50.639 03:01:21 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.639 03:01:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.639 ************************************ 00:06:50.639 START TEST thread_poller_perf 00:06:50.639 ************************************ 00:06:50.639 03:01:21 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:50.639 [2024-05-15 03:01:21.709599] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:50.639 [2024-05-15 03:01:21.709653] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008027 ] 00:06:50.897 [2024-05-15 03:01:21.808946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.897 [2024-05-15 03:01:21.900539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.897 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:52.273 ====================================== 00:06:52.273 busy:2107424426 (cyc) 00:06:52.273 total_run_count: 242000 00:06:52.273 tsc_hz: 2100000000 (cyc) 00:06:52.273 ====================================== 00:06:52.273 poller_cost: 8708 (cyc), 4146 (nsec) 00:06:52.273 00:06:52.273 real 0m1.329s 00:06:52.273 user 0m1.214s 00:06:52.273 sys 0m0.109s 00:06:52.273 03:01:23 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.273 03:01:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:52.273 ************************************ 00:06:52.273 END TEST thread_poller_perf 00:06:52.273 ************************************ 00:06:52.273 03:01:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.273 03:01:23 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:52.273 03:01:23 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.273 03:01:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.273 ************************************ 00:06:52.273 START TEST thread_poller_perf 00:06:52.273 ************************************ 00:06:52.273 03:01:23 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.273 [2024-05-15 03:01:23.110897] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:52.273 [2024-05-15 03:01:23.110963] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008273 ] 00:06:52.273 [2024-05-15 03:01:23.207187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.273 [2024-05-15 03:01:23.296734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.273 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:53.660 ====================================== 00:06:53.660 busy:2102427546 (cyc) 00:06:53.660 total_run_count: 3187000 00:06:53.660 tsc_hz: 2100000000 (cyc) 00:06:53.660 ====================================== 00:06:53.660 poller_cost: 659 (cyc), 313 (nsec) 00:06:53.660 00:06:53.660 real 0m1.318s 00:06:53.660 user 0m1.214s 00:06:53.660 sys 0m0.099s 00:06:53.660 03:01:24 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.660 03:01:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.660 ************************************ 00:06:53.660 END TEST thread_poller_perf 00:06:53.660 ************************************ 00:06:53.660 03:01:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:53.660 00:06:53.660 real 0m2.880s 00:06:53.660 user 0m2.522s 00:06:53.660 sys 0m0.355s 00:06:53.660 03:01:24 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.660 03:01:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.660 ************************************ 00:06:53.660 END TEST thread 00:06:53.660 ************************************ 00:06:53.660 03:01:24 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:06:53.660 03:01:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:53.660 03:01:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.660 03:01:24 -- common/autotest_common.sh@10 -- # set +x 00:06:53.660 ************************************ 00:06:53.660 START TEST accel 00:06:53.660 ************************************ 00:06:53.660 03:01:24 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:06:53.660 * Looking for test storage... 00:06:53.660 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:06:53.660 03:01:24 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:53.660 03:01:24 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:53.660 03:01:24 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:53.660 03:01:24 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4008557 00:06:53.660 03:01:24 accel -- accel/accel.sh@63 -- # waitforlisten 4008557 00:06:53.660 03:01:24 accel -- common/autotest_common.sh@827 -- # '[' -z 4008557 ']' 00:06:53.660 03:01:24 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.660 03:01:24 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:53.660 03:01:24 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.660 03:01:24 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:53.660 03:01:24 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
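Both poller_perf runs above report the same derived metric: poller_cost is busy TSC cycles divided by total_run_count, converted to nanoseconds via tsc_hz. The runs differ only in poller period (-l 1 vs -l 0); with a 0 microseconds period the pollers execute far more often (3187000 vs 242000 runs) and the measured per-call cost drops from 8708 to 659 cycles. Re-deriving the logged figures in shell arithmetic:

    # run 1 (-l 1): 2107424426 / 242000  = 8708 cyc -> 8708 / 2.1 GHz = 4146 ns
    # run 2 (-l 0): 2102427546 / 3187000 =  659 cyc ->  659 / 2.1 GHz =  313 ns
    busy=2102427546 runs=3187000 tsc_hz=2100000000
    cost_cyc=$((busy / runs))                          # 659
    cost_nsec=$((cost_cyc * 1000000000 / tsc_hz))      # 313
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"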
00:06:53.660 03:01:24 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.660 03:01:24 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.660 03:01:24 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.660 03:01:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.660 03:01:24 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.661 03:01:24 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.661 03:01:24 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.661 03:01:24 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:53.661 03:01:24 accel -- accel/accel.sh@41 -- # jq -r . 00:06:53.661 [2024-05-15 03:01:24.656083] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:53.661 [2024-05-15 03:01:24.656146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008557 ] 00:06:53.661 [2024-05-15 03:01:24.754409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.926 [2024-05-15 03:01:24.852398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.496 03:01:25 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.496 03:01:25 accel -- common/autotest_common.sh@860 -- # return 0 00:06:54.496 03:01:25 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:54.496 03:01:25 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:54.496 03:01:25 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:54.496 03:01:25 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:54.496 03:01:25 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:54.496 03:01:25 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:54.497 03:01:25 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:54.497 03:01:25 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.497 03:01:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.497 03:01:25 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 
03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.497 03:01:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.497 03:01:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.497 03:01:25 accel -- accel/accel.sh@75 -- # killprocess 4008557 00:06:54.497 03:01:25 accel -- common/autotest_common.sh@946 -- # '[' -z 4008557 ']' 00:06:54.497 03:01:25 accel -- common/autotest_common.sh@950 -- # kill -0 4008557 00:06:54.811 03:01:25 accel -- common/autotest_common.sh@951 -- # uname 00:06:54.811 03:01:25 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:54.811 03:01:25 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4008557 00:06:54.811 03:01:25 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:54.811 03:01:25 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:54.811 03:01:25 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4008557' 00:06:54.811 killing process with pid 4008557 00:06:54.811 03:01:25 accel -- common/autotest_common.sh@965 -- # kill 4008557 00:06:54.811 03:01:25 accel -- common/autotest_common.sh@970 -- # wait 4008557 00:06:55.098 03:01:26 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:55.098 03:01:26 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:55.098 03:01:26 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:55.098 03:01:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.098 03:01:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.098 03:01:26 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:55.098 03:01:26 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:55.098 03:01:26 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:55.098 03:01:26 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.098 03:01:26 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.098 03:01:26 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.098 03:01:26 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.098 03:01:26 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.098 03:01:26 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:55.098 03:01:26 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
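The get_expected_opcs run above asks the freshly started target which module services each accel opcode: accel_get_opc_assignments returns one JSON object, jq flattens it into key=value lines, and a read loop with IFS== splits each line, which is why every traced iteration ends in expected_opcs["$opc"]=software. A sketch of that loop, assuming the rpc.py path from the log:

    rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    declare -A expected_opcs

    exp_opcs=($("$rpc_py" accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))   # e.g. copy=software
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"    # split "copy=software" on '='
        expected_opcs["$opc"]=$module              # every opcode maps to software in this run
    done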
00:06:55.098 03:01:26 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.098 03:01:26 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:55.098 03:01:26 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:55.098 03:01:26 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:55.098 03:01:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.098 03:01:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.098 ************************************ 00:06:55.098 START TEST accel_missing_filename 00:06:55.098 ************************************ 00:06:55.098 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:55.098 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:55.098 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:55.098 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:55.098 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.098 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:55.098 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.098 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:55.098 03:01:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:55.098 03:01:26 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.098 03:01:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:55.098 03:01:26 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.098 03:01:26 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.098 03:01:26 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.098 03:01:26 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.098 03:01:26 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:55.098 03:01:26 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:55.098 [2024-05-15 03:01:26.240949] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:55.098 [2024-05-15 03:01:26.241002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008830 ] 00:06:55.357 [2024-05-15 03:01:26.339811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.357 [2024-05-15 03:01:26.430267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.357 [2024-05-15 03:01:26.489902] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.618 [2024-05-15 03:01:26.553346] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:55.618 A filename is required. 
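accel_missing_filename is a negative test: NOT runs accel_perf without -l, captures the exit status, and succeeds only because that status is non-zero. The unwinding traced next (es=234 -> 106 -> 1) strips the 128+signal offset and then collapses the failure class to 1 before the final inverted check. A sketch of that logic, pieced together from the traced autotest_common.sh line numbers; the exact case patterns are an assumption:

    NOT() {
        local es=0
        "$@" || es=$?                # accel_perf exits 234 here
        if ((es > 128)); then        # 128+N usually means death by signal N
            es=$((es - 128))         # 234 -> 106
            case "$es" in
                *) es=1 ;;           # assumed mapping: treat it as a plain failure
            esac
        fi
        ((!es == 0))                 # invert: NOT succeeds only if the command failed
    }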
00:06:55.618 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:55.618 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.618 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:55.618 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:55.618 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:55.618 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.618 00:06:55.618 real 0m0.454s 00:06:55.618 user 0m0.306s 00:06:55.618 sys 0m0.158s 00:06:55.618 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.618 03:01:26 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:55.618 ************************************ 00:06:55.618 END TEST accel_missing_filename 00:06:55.618 ************************************ 00:06:55.618 03:01:26 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:55.618 03:01:26 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:55.618 03:01:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.618 03:01:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.618 ************************************ 00:06:55.618 START TEST accel_compress_verify 00:06:55.618 ************************************ 00:06:55.618 03:01:26 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:55.618 03:01:26 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:55.618 03:01:26 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:55.618 03:01:26 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:55.618 03:01:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.618 03:01:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:55.618 03:01:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.618 03:01:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:55.618 03:01:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:55.618 03:01:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:55.618 03:01:26 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.618 03:01:26 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.618 03:01:26 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.618 03:01:26 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.618 03:01:26 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.618 03:01:26 
accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:55.618 03:01:26 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:55.618 [2024-05-15 03:01:26.766777] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:55.618 [2024-05-15 03:01:26.766829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009072 ] 00:06:55.877 [2024-05-15 03:01:26.866250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.877 [2024-05-15 03:01:26.957398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.877 [2024-05-15 03:01:27.017275] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.137 [2024-05-15 03:01:27.081781] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:56.137 00:06:56.137 Compression does not support the verify option, aborting. 00:06:56.137 03:01:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:56.137 03:01:27 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.137 03:01:27 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:56.137 03:01:27 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:56.137 03:01:27 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:56.137 03:01:27 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.137 00:06:56.137 real 0m0.458s 00:06:56.137 user 0m0.340s 00:06:56.137 sys 0m0.152s 00:06:56.137 03:01:27 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.137 03:01:27 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:56.137 ************************************ 00:06:56.137 END TEST accel_compress_verify 00:06:56.137 ************************************ 00:06:56.137 03:01:27 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:56.137 03:01:27 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:56.137 03:01:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.137 03:01:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.137 ************************************ 00:06:56.137 START TEST accel_wrong_workload 00:06:56.137 ************************************ 00:06:56.137 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:56.137 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:56.137 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:56.137 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.137 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.137 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.137 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.137 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:56.137 03:01:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:56.137 03:01:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:56.137 03:01:27 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.137 03:01:27 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.137 03:01:27 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.137 03:01:27 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.137 03:01:27 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.137 03:01:27 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:56.137 03:01:27 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:56.137 Unsupported workload type: foobar 00:06:56.137 [2024-05-15 03:01:27.292864] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:56.396 accel_perf options: 00:06:56.396 [-h help message] 00:06:56.396 [-q queue depth per core] 00:06:56.396 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:56.396 [-T number of threads per core 00:06:56.396 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:56.397 [-t time in seconds] 00:06:56.397 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:56.397 [ dif_verify, , dif_generate, dif_generate_copy 00:06:56.397 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:56.397 [-l for compress/decompress workloads, name of uncompressed input file 00:06:56.397 [-S for crc32c workload, use this seed value (default 0) 00:06:56.397 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:56.397 [-f for fill workload, use this BYTE value (default 255) 00:06:56.397 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:56.397 [-y verify result if this switch is on] 00:06:56.397 [-a tasks to allocate per core (default: same value as -q)] 00:06:56.397 Can be used to spread operations across a wider range of memory. 
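Each NOT invocation is preceded by a valid_exec_arg guard, which is why every one of these negative tests traces a pair of type -t probes before accel_perf runs: the helper refuses to wrap anything bash cannot actually execute. A sketch of that guard; the accepted type list is an assumption:

    valid_exec_arg() {
        local arg=$1
        # type -t prints builtin, function, alias, keyword, file, or nothing
        case "$(type -t "$arg")" in
            builtin | function | file) return 0 ;;
            *) return 1 ;;
        esac
    }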
00:06:56.397 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:56.397 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.397 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.397 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.397 00:06:56.397 real 0m0.037s 00:06:56.397 user 0m0.022s 00:06:56.397 sys 0m0.015s 00:06:56.397 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.397 03:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:56.397 ************************************ 00:06:56.397 END TEST accel_wrong_workload 00:06:56.397 ************************************ 00:06:56.397 Error: writing output failed: Broken pipe 00:06:56.397 03:01:27 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:56.397 03:01:27 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:56.397 03:01:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.397 03:01:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.397 ************************************ 00:06:56.397 START TEST accel_negative_buffers 00:06:56.397 ************************************ 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:56.397 03:01:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:56.397 03:01:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:56.397 03:01:27 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.397 03:01:27 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.397 03:01:27 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.397 03:01:27 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.397 03:01:27 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.397 03:01:27 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:56.397 03:01:27 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:56.397 -x option must be non-negative. 
00:06:56.397 [2024-05-15 03:01:27.403100] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:56.397 accel_perf options: 00:06:56.397 [-h help message] 00:06:56.397 [-q queue depth per core] 00:06:56.397 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:56.397 [-T number of threads per core 00:06:56.397 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:56.397 [-t time in seconds] 00:06:56.397 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:56.397 [ dif_verify, , dif_generate, dif_generate_copy 00:06:56.397 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:56.397 [-l for compress/decompress workloads, name of uncompressed input file 00:06:56.397 [-S for crc32c workload, use this seed value (default 0) 00:06:56.397 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:56.397 [-f for fill workload, use this BYTE value (default 255) 00:06:56.397 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:56.397 [-y verify result if this switch is on] 00:06:56.397 [-a tasks to allocate per core (default: same value as -q)] 00:06:56.397 Can be used to spread operations across a wider range of memory. 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.397 00:06:56.397 real 0m0.037s 00:06:56.397 user 0m0.023s 00:06:56.397 sys 0m0.014s 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.397 03:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:56.397 ************************************ 00:06:56.397 END TEST accel_negative_buffers 00:06:56.397 ************************************ 00:06:56.397 Error: writing output failed: Broken pipe 00:06:56.397 03:01:27 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:56.397 03:01:27 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:56.397 03:01:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.397 03:01:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.397 ************************************ 00:06:56.397 START TEST accel_crc32c 00:06:56.397 ************************************ 00:06:56.397 03:01:27 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:56.397 03:01:27 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:56.397 [2024-05-15 03:01:27.512679] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:56.397 [2024-05-15 03:01:27.512733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009134 ] 00:06:56.656 [2024-05-15 03:01:27.613863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.656 [2024-05-15 03:01:27.709396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 
accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 03:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.033 03:01:28 
accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:58.033 03:01:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.033 00:06:58.033 real 0m1.473s 00:06:58.033 user 0m1.317s 00:06:58.033 sys 0m0.157s 00:06:58.033 03:01:28 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.033 03:01:28 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:58.033 ************************************ 00:06:58.033 END TEST accel_crc32c 00:06:58.033 ************************************ 00:06:58.033 03:01:28 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:58.033 03:01:28 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:58.033 03:01:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.033 03:01:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.033 ************************************ 00:06:58.033 START TEST accel_crc32c_C2 00:06:58.033 ************************************ 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.033 03:01:29 
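The accel_crc32c block that ends above shows accel_perf driving the software CRC-32C path with the settings echoed by its val= loop (4096-byte buffers, 1-second duration, verification on), completing in real 0m1.473s. A minimal sketch of an equivalent manual invocation, assuming the same SPDK build tree; the JSON accel config that the harness supplies on /dev/fd/62 comes from build_accel_config and is omitted here, so accel_perf should fall back to the software module that the accel_module=software entries in the trace point at:

  # hedged re-run of the crc32c workload; flags taken verbatim from the trace
  # (-t: seconds to run, -w: workload type, -y: verify results)
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -y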
accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.033 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:58.033 [2024-05-15 03:01:29.058416] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:58.033 [2024-05-15 03:01:29.058467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009387 ] 00:06:58.033 [2024-05-15 03:01:29.145715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.291 [2024-05-15 03:01:29.240455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.291 03:01:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.666 00:06:59.666 real 0m1.453s 00:06:59.666 user 0m1.314s 00:06:59.666 sys 0m0.144s 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.666 03:01:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:59.666 ************************************ 00:06:59.666 END TEST accel_crc32c_C2 00:06:59.666 ************************************ 00:06:59.666 03:01:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:59.666 03:01:30 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:59.666 03:01:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.666 03:01:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.666 ************************************ 00:06:59.666 START TEST accel_copy 00:06:59.666 ************************************ 00:06:59.666 03:01:30 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
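accel_crc32c_C2, which finishes above at real 0m1.453s, repeats the CRC-32C workload with -C 2, accel_perf's option for chaining multiple source buffers into a single crc32c submission; the extra val=0 read in its trace is the CRC seed. A sketch of the underlying command, reconstructed from the accel.sh@12 line above under the same assumptions as the previous sketch:

  # -C 2: chain two source vectors per crc32c operation
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2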
copy -y 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:59.666 03:01:30 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:59.666 [2024-05-15 03:01:30.583347] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:59.667 [2024-05-15 03:01:30.583397] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009710 ] 00:06:59.667 [2024-05-15 03:01:30.671156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.667 [2024-05-15 03:01:30.762052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.667 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.926 03:01:30 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.926 03:01:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@19 
-- # IFS=: 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.861 03:01:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.861 03:01:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.861 03:01:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.861 03:01:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.861 03:01:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.861 03:01:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.861 03:01:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.861 03:01:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:00.861 03:01:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.861 00:07:00.861 real 0m1.450s 00:07:00.861 user 0m1.310s 00:07:00.861 sys 0m0.145s 00:07:00.861 03:01:32 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.861 03:01:32 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:00.861 ************************************ 00:07:00.861 END TEST accel_copy 00:07:00.861 ************************************ 00:07:01.120 03:01:32 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.120 03:01:32 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:01.120 03:01:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.120 03:01:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.120 ************************************ 00:07:01.120 START TEST accel_fill 00:07:01.120 ************************************ 00:07:01.120 03:01:32 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:01.120 03:01:32 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
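accel_copy exercises the plain buffer-copy workload with the same defaults visible in its trace (4096-byte buffers, 1 second, software module) and lands at real 0m1.450s, in line with the two crc32c runs. For a quick side-by-side of the software-path workloads seen so far, a loop such as the following is enough, since accel_perf prints its own per-core throughput summary:

  # compare the workloads exercised above; workload names are taken from the log
  for w in crc32c copy; do
      /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -t 1 -w "$w" -y
  done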
00:07:01.120 [2024-05-15 03:01:32.108500] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:07:01.120 [2024-05-15 03:01:32.108551] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010047 ] 00:07:01.120 [2024-05-15 03:01:32.207070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.379 [2024-05-15 03:01:32.298212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:01.379 03:01:32 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.379 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 03:01:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 
03:01:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:02.757 03:01:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.757 00:07:02.757 real 0m1.462s 00:07:02.757 user 0m1.314s 00:07:02.757 sys 0m0.154s 00:07:02.757 03:01:33 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.757 03:01:33 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:02.757 ************************************ 00:07:02.757 END TEST accel_fill 00:07:02.757 ************************************ 00:07:02.757 03:01:33 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:02.757 03:01:33 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:02.757 03:01:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.757 03:01:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.757 ************************************ 00:07:02.757 START TEST accel_copy_crc32c 00:07:02.757 ************************************ 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:02.757 [2024-05-15 03:01:33.642533] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
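accel_fill, ending above at real 0m1.462s, drives the memset-style fill workload with -f 128 -q 64 -a 64. The val= loop corroborates two of these flags: val=0x80 is the 128 fill byte and val=64 the raised queue depth; -a 64 is reproduced verbatim from the log rather than interpreted, since its meaning is not evident from the trace alone:

  # fill workload: pattern byte 128 (0x80 in the trace), queue depth 64
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y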
00:07:02.757 [2024-05-15 03:01:33.642587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010337 ] 00:07:02.757 [2024-05-15 03:01:33.740348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.757 [2024-05-15 03:01:33.830580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:02.757 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.015 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.015 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.015 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.016 03:01:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.950 
03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.950 00:07:03.950 real 0m1.475s 00:07:03.950 user 0m1.313s 00:07:03.950 sys 0m0.154s 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.950 03:01:35 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:03.950 ************************************ 00:07:03.950 END TEST accel_copy_crc32c 00:07:03.950 ************************************ 00:07:04.208 03:01:35 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:04.208 03:01:35 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:04.208 03:01:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.208 03:01:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.208 ************************************ 00:07:04.208 START TEST accel_copy_crc32c_C2 00:07:04.208 ************************************ 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
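copy_crc32c, completing above at real 0m1.475s, fuses the two earlier operations: each submission copies a source buffer to a destination and computes its CRC-32C in one pass, which is why the val= loop reads '4096 bytes' twice (source and destination) alongside a val=0 seed. An equivalent manual run, under the same assumptions as the earlier sketches:

  # combined copy + crc32c over 4096-byte buffers (defaults per the trace)
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y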
copy_crc32c -y -C 2 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:04.208 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:04.208 [2024-05-15 03:01:35.190152] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:07:04.208 [2024-05-15 03:01:35.190204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010587 ] 00:07:04.208 [2024-05-15 03:01:35.289191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.467 [2024-05-15 03:01:35.379883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@20 -- # val=0 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.467 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.468 03:01:35 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.468 03:01:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.843 00:07:05.843 real 0m1.469s 00:07:05.843 user 0m1.317s 00:07:05.843 sys 0m0.151s 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.843 03:01:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:05.843 ************************************ 00:07:05.843 END TEST accel_copy_crc32c_C2 00:07:05.843 ************************************ 00:07:05.843 03:01:36 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 
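The chained variant that ends above (real 0m1.469s) differs in one telling detail: alongside a '4096 bytes' buffer its trace reads '8192 bytes', consistent with -C 2 splitting a doubled source across two 4096-byte vectors before the combined copy-plus-CRC. The reconstructed command, with the same hedges as the sketches above:

  # copy_crc32c with two chained source vectors (8192 bytes total per the trace)
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2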
00:07:05.843 03:01:36 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:05.843 03:01:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.843 03:01:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.843 ************************************ 00:07:05.843 START TEST accel_dualcast 00:07:05.843 ************************************ 00:07:05.843 03:01:36 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:05.843 03:01:36 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:05.843 03:01:36 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:05.843 03:01:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.843 03:01:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.843 03:01:36 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:05.843 03:01:36 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:05.843 03:01:36 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:05.843 03:01:36 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.843 03:01:36 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.843 03:01:36 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.843 03:01:36 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.844 03:01:36 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.844 03:01:36 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:05.844 03:01:36 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:05.844 [2024-05-15 03:01:36.728555] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:07:05.844 [2024-05-15 03:01:36.728608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010832 ]
00:07:05.844 [2024-05-15 03:01:36.826681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:05.844 [2024-05-15 03:01:36.917208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.844 [... repeated 03:01:36 accel.accel_dualcast xtrace lines (accel/accel.sh@19-23: case "$var" in / IFS=: / read -r var val) elided; values parsed from accel_perf output: 0x1, accel_opc=dualcast, '4096 bytes', accel_module=software, 32, 32, 1, '1 seconds', Yes ...]
00:07:07.233 [... trailing 03:01:38 accel.accel_dualcast xtrace lines with empty "val=" elided ...]
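The elided xtrace block above is a single loop walking accel_perf's "key: value" output. A hypothetical reconstruction of the loop shape in accel/accel.sh that would produce exactly these `IFS=:` / `read -r var val` / `case "$var"` trace lines (a sketch, not the verbatim SPDK source; the assignments to accel_opc and accel_module are visible in the trace at @22/@23):

    # read "key: value" pairs from the captured accel_perf output
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=${val# } ;;     # e.g. "dualcast"
            *module*) accel_module=${val# } ;;  # e.g. "software"
        esac
    done < <(run_accel_perf_and_capture)        # hypothetical capture helper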
00:07:07.233 03:01:38 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:07.233 03:01:38 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:07.233 03:01:38 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:07.233
00:07:07.233 real 0m1.457s
00:07:07.233 user 0m1.309s
00:07:07.233 sys 0m0.152s
00:07:07.233 03:01:38 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:07.233 03:01:38 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:07:07.233 ************************************
00:07:07.233 END TEST accel_dualcast
00:07:07.233 ************************************
00:07:07.233 03:01:38 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:07.233 03:01:38 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:07:07.233 03:01:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:07.233 03:01:38 accel -- common/autotest_common.sh@10 -- # set +x
00:07:07.233 ************************************
00:07:07.233 START TEST accel_compare
00:07:07.233 ************************************
00:07:07.233 03:01:38 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y
00:07:07.233 03:01:38 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:07.233 03:01:38 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:07.233 [... 03:01:38 accel.accel_compare build_accel_config xtrace lines (accel/accel.sh@16-@19 locals, @12, @31-@41: accel_json_cfg=(), [[ 0 -gt 0 ]] x3, [[ -n '' ]], local IFS=,, jq -r .) elided ...]
[2024-05-15 03:01:38.255332] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:07:07.233 [2024-05-15 03:01:38.255385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011082 ]
00:07:07.233 [2024-05-15 03:01:38.353099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:07.498 [2024-05-15 03:01:38.443544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:07.498 [... repeated 03:01:38 accel.accel_compare xtrace lines (accel/accel.sh@19-23: case "$var" in / IFS=: / read -r var val) elided; values parsed: 0x1, accel_opc=compare, '4096 bytes', accel_module=software, 32, 32, 1, '1 seconds', Yes ...]
00:07:08.878 [... trailing 03:01:39 accel.accel_compare xtrace lines with empty "val=" elided ...]
00:07:08.878 03:01:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:08.878 03:01:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:08.878 03:01:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:08.878
00:07:08.878 real 0m1.459s
00:07:08.878 user 0m1.304s
00:07:08.878 sys 0m0.160s
00:07:08.878 03:01:39 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:08.878 03:01:39 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:07:08.878 ************************************
00:07:08.878 END TEST accel_compare
00:07:08.878 ************************************
00:07:08.878 03:01:39 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:08.878 03:01:39 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:07:08.878 03:01:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:08.878 03:01:39 accel -- common/autotest_common.sh@10 -- # set +x
00:07:08.878 ************************************
00:07:08.878 START TEST accel_xor
00:07:08.878 ************************************
00:07:08.878 03:01:39 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y
00:07:08.878 03:01:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:08.878 03:01:39 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:08.878 [... 03:01:39 accel.accel_xor build_accel_config xtrace lines (accel/accel.sh@16-@19, @12, @31-@41) elided ...]
[2024-05-15 03:01:39.780589] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
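The xor workload XORs several source buffers into one destination; the config dump that follows shows `val=2`, i.e. two sources by default (the later `-x 3` run raises it to three). A tiny bash sketch of the operation's semantics on single bytes, for illustration only -- accel_perf of course works on the 4096-byte buffers shown in the trace:

    # xor of two sources, then of three (what -x appears to control):
    printf '0x%02x\n' $(( 0xaa ^ 0x55 ))          # -> 0xff
    printf '0x%02x\n' $(( 0xaa ^ 0x55 ^ 0x0f ))   # -> 0xf0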
00:07:08.878 [2024-05-15 03:01:39.780643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011327 ]
00:07:08.878 [2024-05-15 03:01:39.878007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:08.878 [2024-05-15 03:01:39.969104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.136 [... repeated 03:01:40 accel.accel_xor xtrace lines (accel/accel.sh@19-23: case "$var" in / IFS=: / read -r var val) elided; values parsed: 0x1, accel_opc=xor, 2 (xor sources), '4096 bytes', accel_module=software, 32, 32, 1, '1 seconds', Yes ...]
00:07:10.075 [... trailing 03:01:41 accel.accel_xor xtrace lines with empty "val=" elided ...]
00:07:10.076 03:01:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:10.076 03:01:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:10.076 03:01:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:10.076
00:07:10.076 real 0m1.452s
00:07:10.076 user 0m1.306s
00:07:10.076 sys 0m0.150s
00:07:10.076 03:01:41 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:10.076 03:01:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:10.076 ************************************
00:07:10.076 END TEST accel_xor
00:07:10.076 ************************************
00:07:10.335 03:01:41 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:10.335 03:01:41 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:07:10.335 03:01:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:10.335 03:01:41 accel -- common/autotest_common.sh@10 -- # set +x
00:07:10.335 ************************************
00:07:10.335 START TEST accel_xor
00:07:10.335 ************************************
00:07:10.335 03:01:41 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3
00:07:10.335 03:01:41 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:07:10.335 03:01:41 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:10.335 [... 03:01:41 accel.accel_xor build_accel_config xtrace lines (accel/accel.sh@16-@19, @12, @31-@41) elided ...]
[2024-05-15 03:01:41.305596] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
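This second accel_xor run repeats the workload with `-x 3`: the harness argument check changes from `'[' 7 -le 1 ']'` to `'[' 9 -le 1 ']'` (two extra argv words) and the config dump below reports `val=3` where the first run reported `val=2`. A hedged sketch of how such a flag is typically threaded through option parsing (variable names hypothetical; not the actual accel_perf source):

    xor_srcs=2                       # default seen in the previous run
    while getopts "t:w:x:y" opt; do
        case $opt in
            x) xor_srcs=$OPTARG ;;   # -x 3 -> three source buffers
            *) ;;                    # t/w/y handled elsewhere in this sketch
        esac
    done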
00:07:10.335 [2024-05-15 03:01:41.305647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011577 ]
00:07:10.335 [2024-05-15 03:01:41.406430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:10.593 [2024-05-15 03:01:41.495889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.593 [... repeated 03:01:41 accel.accel_xor xtrace lines (accel/accel.sh@19-23: case "$var" in / IFS=: / read -r var val) elided; values parsed: 0x1, accel_opc=xor, 3 (xor sources), '4096 bytes', accel_module=software, 32, 32, 1, '1 seconds', Yes ...]
00:07:12.001 [... trailing 03:01:42 accel.accel_xor xtrace lines with empty "val=" elided ...]
00:07:12.002 03:01:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:12.002 03:01:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:12.002 03:01:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:12.002
00:07:12.002 real 0m1.466s
00:07:12.002 user 0m1.312s
00:07:12.002 sys 0m0.154s
00:07:12.002 03:01:42 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:12.002 03:01:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:12.002 ************************************
00:07:12.002 END TEST accel_xor
00:07:12.002 ************************************
00:07:12.002 03:01:42 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:12.002 03:01:42 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:12.002 03:01:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:12.002 03:01:42 accel -- common/autotest_common.sh@10 -- # set +x
00:07:12.002 ************************************
00:07:12.002 START TEST accel_dif_verify
00:07:12.002 ************************************
00:07:12.002 03:01:42 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify
00:07:12.002 03:01:42 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:07:12.002 03:01:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:12.002 [... 03:01:42 accel.accel_dif_verify build_accel_config xtrace lines (accel/accel.sh@16-@19, @12, @31-@41) elided ...]
[2024-05-15 03:01:42.847759] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
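dif_verify exercises T10 DIF protection-information checking; the config dump below carries two 4096-byte buffer sizes plus a 512-byte size and an 8-byte size, which read naturally as the transfer size, the protected block size, and the per-block protection information. Under that (hedged) interpretation, the arithmetic as a bash sketch:

    xfer=4096; blk=512; pi=8
    echo "$(( xfer / blk )) blocks, $(( (xfer / blk) * pi )) bytes of DIF metadata"
    # -> 8 blocks, 64 bytes of DIF metadata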
00:07:12.002 [2024-05-15 03:01:42.847815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011863 ]
00:07:12.002 [2024-05-15 03:01:42.946067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:12.002 [2024-05-15 03:01:43.041963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.002 [... repeated 03:01:43 accel.accel_dif_verify xtrace lines (accel/accel.sh@19-23: case "$var" in / IFS=: / read -r var val) elided; values parsed: 0x1, accel_opc=dif_verify, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', accel_module=software, 32, 32, 1, '1 seconds', No ...]
00:07:13.377 [... trailing 03:01:44 accel.accel_dif_verify xtrace lines with empty "val=" elided ...]
00:07:13.377 03:01:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:13.377 03:01:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:13.377 03:01:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:13.377
00:07:13.377 real 0m1.462s
00:07:13.377 user 0m1.321s
00:07:13.377 sys 0m0.149s
00:07:13.377 03:01:44 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:13.377 03:01:44 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:07:13.377 ************************************
00:07:13.377 END TEST accel_dif_verify
00:07:13.377 ************************************
00:07:13.377 03:01:44 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:13.377 03:01:44 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:13.377 03:01:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:13.377 03:01:44 accel -- common/autotest_common.sh@10 -- # set +x
00:07:13.377 ************************************
00:07:13.377 START TEST accel_dif_generate
00:07:13.377 ************************************
00:07:13.377 03:01:44 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate
00:07:13.377 03:01:44 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
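dif_generate is the write-side counterpart of the dif_verify run above: rather than checking existing protection information it computes it for each block, with the same buffer geometry in the config dump. The pairing can be reproduced directly (sketch; the config is fed via process substitution as in the earlier runs, and the cfg value is a hypothetical minimal placeholder):

    cfg='{"subsystems": []}'   # hypothetical minimal config, as before
    build/examples/accel_perf -c <(printf '%s' "$cfg") -t 1 -w dif_generate
    # and the verify side, matching the previous test:
    build/examples/accel_perf -c <(printf '%s' "$cfg") -t 1 -w dif_verify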
00:07:13.377 03:01:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:13.377 [... 03:01:44 accel.accel_dif_generate build_accel_config xtrace lines (accel/accel.sh@16-@19, @12, @31-@41) elided ...]
[2024-05-15 03:01:44.378583] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:07:13.377 [2024-05-15 03:01:44.378634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012211 ]
00:07:13.377 [2024-05-15 03:01:44.476209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.636 [2024-05-15 03:01:44.566871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.636 [... repeated 03:01:44 accel.accel_dif_generate xtrace lines (accel/accel.sh@19-23: case "$var" in / IFS=: / read -r var val) elided; values parsed: 0x1, accel_opc=dif_generate, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', accel_module=software, 32, 32, 1, '1 seconds', No ...]
read -r var val 00:07:13.636 03:01:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:13.636 03:01:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.636 03:01:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.636 03:01:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.636 03:01:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.636 03:01:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.636 03:01:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.636 03:01:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.636 03:01:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.636 03:01:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.636 03:01:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.636 03:01:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:15.009 03:01:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.009 00:07:15.009 real 0m1.468s 00:07:15.009 user 0m1.323s 00:07:15.009 sys 0m0.148s 00:07:15.009 03:01:45 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.009 03:01:45 
accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:07:15.009 ************************************
00:07:15.009 END TEST accel_dif_generate
00:07:15.009 ************************************
00:07:15.009 03:01:45 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:07:15.009 03:01:45 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:15.009 03:01:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:15.009 03:01:45 accel -- common/autotest_common.sh@10 -- # set +x
00:07:15.009 ************************************
00:07:15.009 START TEST accel_dif_generate_copy
00:07:15.009 ************************************
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:07:15.009 03:01:45 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:07:15.009 [2024-05-15 03:01:45.907117] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
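As the accel.sh@12 trace line just above shows, every one of these tests bottoms out in the same binary: the harness builds a JSON accel config, hands it to the accel_perf example over /dev/fd/62, and runs one workload for one second. A minimal sketch of an equivalent manual run, assuming a built SPDK tree at $SPDK_DIR; the flags (-c, -t, -w) are copied verbatim from the trace, while the empty JSON config is an assumption that mirrors the empty accel_json_cfg=() seen above:

# Run the software dif_generate_copy workload for 1 second, roughly as accel.sh@12 does.
# The -c /dev/fd/62 in the trace is bash process substitution; '{}' stands in for the
# jq-generated config and is an assumption, valid only when no extra module is configured.
$SPDK_DIR/build/examples/accel_perf -c <(echo '{}') -t 1 -w dif_generate_copy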
00:07:15.009 [2024-05-15 03:01:45.907173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012528 ] 00:07:15.009 [2024-05-15 03:01:46.002523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.009 [2024-05-15 03:01:46.091921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 
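The long runs of case/IFS=:/read entries that begin above and fill most of this section are not many different commands; they are the xtrace of one small parsing loop in accel.sh (the @19-@23 references). The script reads accel_perf's "key: value" output one line at a time and remembers the module and opcode it reports (accel_module=software at accel.sh@22, accel_opc=... at accel.sh@23), so the [[ -n software ]] / [[ -n dif_generate_copy ]] checks at accel.sh@27 can assert on them afterwards. A sketch of that idiom, reconstructed from the trace rather than copied from the real script, so the case patterns are assumptions:

# Split each "key: value" line accel_perf prints and capture what actually ran.
while IFS=: read -r var val; do
  case "$var" in
    *module*) accel_module=$val ;;  # trace shows accel_module=software
    *opc*)    accel_opc=$val ;;     # trace shows accel_opc=dif_generate_copy
  esac
done < <("$SPDK_DIR/build/examples/accel_perf" -c <(echo '{}') -t 1 -w dif_generate_copy)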
00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.009 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.010 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.010 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.010 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.010 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:15.010 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.010 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.269 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.269 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.269 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.269 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.269 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.269 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.269 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.269 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.269 03:01:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.203 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.204 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.204 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.204 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.204 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.204 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.204 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.204 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.204 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.204 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:16.204 03:01:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.204 00:07:16.204 real 0m1.458s 00:07:16.204 user 0m1.324s 00:07:16.204 sys 0m0.139s 00:07:16.204 03:01:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.204 03:01:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:16.204 ************************************ 00:07:16.204 END TEST accel_dif_generate_copy 00:07:16.204 ************************************ 00:07:16.462 03:01:47 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:16.462 03:01:47 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:16.462 03:01:47 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:16.462 03:01:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.462 03:01:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.462 ************************************ 00:07:16.462 START TEST accel_comp 00:07:16.462 ************************************ 00:07:16.462 03:01:47 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:16.462 03:01:47 accel.accel_comp -- 
accel/accel.sh@19 -- # IFS=: 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:16.462 03:01:47 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:16.462 [2024-05-15 03:01:47.438414] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:07:16.462 [2024-05-15 03:01:47.438464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012781 ] 00:07:16.462 [2024-05-15 03:01:47.536353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.720 [2024-05-15 03:01:47.626723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.720 03:01:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:18.092 03:01:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.092 00:07:18.092 real 0m1.460s 00:07:18.092 user 0m1.308s 00:07:18.092 sys 0m0.158s 00:07:18.092 03:01:48 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.092 03:01:48 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:18.092 ************************************ 00:07:18.092 END TEST accel_comp 00:07:18.092 ************************************ 00:07:18.092 03:01:48 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:18.092 03:01:48 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:18.092 03:01:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.093 03:01:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.093 ************************************ 00:07:18.093 START TEST accel_decomp 00:07:18.093 ************************************ 00:07:18.093 03:01:48 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:18.093 03:01:48 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:18.093 [2024-05-15 03:01:48.962507] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:07:18.093 [2024-05-15 03:01:48.962559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013024 ] 00:07:18.093 [2024-05-15 03:01:49.059989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.093 [2024-05-15 03:01:49.149795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:18.093 03:01:49 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.093 03:01:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.465 03:01:50 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.465 00:07:19.465 real 0m1.450s 00:07:19.465 user 0m1.302s 00:07:19.465 sys 0m0.155s 00:07:19.465 03:01:50 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.465 03:01:50 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:19.465 ************************************ 00:07:19.465 END TEST accel_decomp 00:07:19.465 ************************************ 00:07:19.465 03:01:50 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:19.465 03:01:50 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']'
00:07:19.465 03:01:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:19.465 03:01:50 accel -- common/autotest_common.sh@10 -- # set +x
00:07:19.465 ************************************
00:07:19.465 START TEST accel_decmop_full
00:07:19.465 ************************************
00:07:19.465 03:01:50 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=:
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=,
00:07:19.465 03:01:50 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r .
00:07:19.465 [2024-05-15 03:01:50.492306] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
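Unlike the 4096-byte DIF cases earlier, the compress/decompress family points accel_perf at a real corpus file. Reading the invocation above: -l names the input (test/accel/bib, also used by accel_comp and accel_decomp), -y appears to request result verification, and the '111250 bytes' value echoed in the trace below suggests -o 0 makes the transfer size track the whole file instead of the 4096-byte default. Treat those flag readings as inferred from this trace, not as accel_perf documentation. A reproduction sketch under those assumptions:

# Decompress the bib corpus in full-file transfers for one second, verifying output.
# Flags copied from the trace; their interpretation is inferred, not authoritative.
$SPDK_DIR/build/examples/accel_perf -c <(echo '{}') -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -o 0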
00:07:19.465 [2024-05-15 03:01:50.492361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013276 ] 00:07:19.465 [2024-05-15 03:01:50.592040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.724 [2024-05-15 03:01:50.683304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.724 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.725 03:01:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.098 03:01:51 
accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.098 03:01:51 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.098 00:07:21.098 real 0m1.483s 00:07:21.098 user 0m1.341s 00:07:21.098 sys 0m0.147s 00:07:21.098 03:01:51 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.098 03:01:51 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:21.098 ************************************ 00:07:21.098 END TEST accel_decmop_full 00:07:21.098 ************************************ 00:07:21.098 03:01:51 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.098 03:01:51 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:21.098 03:01:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.098 03:01:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.098 ************************************ 00:07:21.098 START TEST accel_decomp_mcore 00:07:21.098 ************************************ 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress 
-l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:21.098 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:21.098 [2024-05-15 03:01:52.045798] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:07:21.098 [2024-05-15 03:01:52.045855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013521 ] 00:07:21.098 [2024-05-15 03:01:52.141244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.098 [2024-05-15 03:01:52.235136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.098 [2024-05-15 03:01:52.235232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.098 [2024-05-15 03:01:52.235338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.098 [2024-05-15 03:01:52.235339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.357 
03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.357 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
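The mcore variant traced here is the same decompress workload fanned out across a core mask: the -m 0xf in the invocation above brings up four reactors (cores 0 through 3 in the EAL notices), and the footer below reflects it, with user CPU time roughly four times the wall-clock time. A sketch of the same run with an explicit mask, mask value copied from the trace and the empty config again an assumption:

# Same decompress workload, scheduled on cores 0-3 via a hex core mask.
$SPDK_DIR/build/examples/accel_perf -c <(echo '{}') -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -m 0xf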
00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.358 03:01:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.735 03:01:53 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:22.735 03:01:53 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:22.735
00:07:22.735 real 0m1.474s
00:07:22.735 user 0m4.723s
00:07:22.735 sys 0m0.159s
00:07:22.735 03:01:53 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:22.735 03:01:53 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:22.735 ************************************
00:07:22.735 END TEST accel_decomp_mcore
00:07:22.735 ************************************
00:07:22.735 03:01:53 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:22.735 03:01:53 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:07:22.735 03:01:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:22.735 03:01:53 accel -- common/autotest_common.sh@10 -- # set +x
00:07:22.735 ************************************
00:07:22.735 START TEST accel_decomp_full_mcore
00:07:22.735 ************************************
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:22.735 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:22.735 [2024-05-15 03:01:53.581296] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:07:22.735 [2024-05-15 03:01:53.581347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013774 ] 00:07:22.735 [2024-05-15 03:01:53.678561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.735 [2024-05-15 03:01:53.773056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.735 [2024-05-15 03:01:53.773151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.735 [2024-05-15 03:01:53.773270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.736 [2024-05-15 03:01:53.773271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.736 03:01:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.109 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.109 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.109 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.109 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.109 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.109 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.109 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.109 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.109 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.109 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.110 03:01:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:24.110
00:07:24.110 real 0m1.487s
00:07:24.110 user 0m4.784s
00:07:24.110 sys 0m0.163s
00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:24.110 03:01:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:24.110 ************************************
00:07:24.110 END TEST accel_decomp_full_mcore
00:07:24.110 ************************************
00:07:24.110 03:01:55 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:24.110 03:01:55 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']'
00:07:24.110 03:01:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:24.110 03:01:55 accel -- common/autotest_common.sh@10 -- # set +x
00:07:24.110 ************************************
00:07:24.110 START TEST accel_decomp_mthread
00:07:24.110 ************************************
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=,
00:07:24.110 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
00:07:24.110 [2024-05-15 03:01:55.128670] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
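The accel_decomp_mthread run that begins above repeats the software decompress workload on a single core (-c 0x1 in the EAL parameters that follow) while asking accel_perf for two worker threads via -T 2. A minimal standalone reproduction of the captured command line might look like the sketch below; SPDK_DIR is a hypothetical shorthand for the workspace path, and the -c /dev/fd/62 argument is dropped because build_accel_config emits no JSON methods for the plain software module:

    # Sketch only: decompress the bib test file with 2 threads (-T 2),
    # mirroring the accel_perf invocation recorded in the trace above.
    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -T 2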
00:07:24.110 [2024-05-15 03:01:55.128705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014025 ] 00:07:24.110 [2024-05-15 03:01:55.213179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.369 [2024-05-15 03:01:55.304022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.369 03:01:55 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.369 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.370 03:01:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.748 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.748 03:01:56 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in
00:07:25.748 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:25.748 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:25.748 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:25.749
00:07:25.749 real 0m1.446s
00:07:25.749 user 0m1.303s
00:07:25.749 sys 0m0.146s
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:25.749 03:01:56 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:07:25.749 ************************************
00:07:25.749 END TEST accel_decomp_mthread
00:07:25.749 ************************************
00:07:25.749 03:01:56 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:25.749 03:01:56 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:07:25.749 03:01:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:25.749 03:01:56 accel -- common/autotest_common.sh@10 -- # set +x
00:07:25.749 ************************************
00:07:25.749 START TEST accel_decomp_full_mthread
************************************ 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:25.749 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:25.749 [2024-05-15 03:01:56.664442] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
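accel_decomp_full_mthread, being set up above, combines the two earlier variations: -o 0, which (judging by the '111250 bytes' value in this config dump versus '4096 bytes' in the chunked runs) drives the whole bib file as a single buffer, and -T 2 for two threads. A hedged sketch of the equivalent invocation, with SPDK_DIR again a hypothetical shorthand for the workspace path:

    # Sketch only: full-buffer decompress with 2 threads, mirroring the
    # accel_perf command line captured above.
    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2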
00:07:25.749 [2024-05-15 03:01:56.664499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014337 ] 00:07:25.749 [2024-05-15 03:01:56.766009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.749 [2024-05-15 03:01:56.857311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.007 03:01:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.007 03:01:56 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:27.385 03:01:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:27.385
00:07:27.385 real 0m1.526s
00:07:27.385 user 0m1.371s
00:07:27.386 sys 0m0.149s
00:07:27.386 03:01:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:27.386 03:01:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:07:27.386 ************************************
00:07:27.386 END TEST accel_decomp_full_mthread
00:07:27.386 ************************************
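That closes the four software-module decompress variants. The suite next sets COMPRESSDEV=1 and boots spdk_tgt with a JSON config that registers the DPDK compressdev accel module; the method the harness builds is visible below in the trace as {"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}. A rough sketch of that boot step, assuming the standard SPDK subsystems/config JSON wrapper and that pmd 0 lets the module choose a PMD itself (the trace further down shows the QAT PMD being selected):

    # Sketch only: start spdk_tgt with the compressdev accel module enabled,
    # feeding the config over a file descriptor the way the harness does.
    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$SPDK_DIR/build/bin/spdk_tgt" -c <(echo '{"subsystems":[{"subsystem":"accel",
        "config":[{"method":"compressdev_scan_accel_module","params":{"pmd":0}}]}]}')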
00:07:27.386 03:01:58 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:07:27.386 03:01:58 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:07:27.386 03:01:58 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:07:27.386 03:01:58 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:27.386 03:01:58 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4014676 00:07:27.386 03:01:58 accel -- accel/accel.sh@63 -- # waitforlisten 4014676 00:07:27.386 03:01:58 accel -- common/autotest_common.sh@827 -- # '[' -z 4014676 ']' 00:07:27.386 03:01:58 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.386 03:01:58 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:27.386 03:01:58 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:27.386 03:01:58 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.386 03:01:58 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:27.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.386 03:01:58 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:27.386 03:01:58 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.386 03:01:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.386 03:01:58 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.386 03:01:58 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.386 03:01:58 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.386 03:01:58 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:27.386 03:01:58 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:27.386 03:01:58 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:27.386 03:01:58 accel -- accel/accel.sh@41 -- # jq -r . 00:07:27.386 [2024-05-15 03:01:58.256408] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
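Once the target is listening, the harness verifies that the scan call actually landed in the accel subsystem; the save_config and jq exchange below is that check. Roughly, assuming rpc.py on the default /var/tmp/spdk.sock socket and the hypothetical SPDK_DIR shorthand:

    # Sketch only: confirm compressdev_scan_accel_module is in the saved config.
    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$SPDK_DIR/scripts/rpc.py" save_config \
        | jq -r '.subsystems[] | select(.subsystem=="accel").config[]' \
        | grep compressdev_scan_accel_module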
00:07:27.386 [2024-05-15 03:01:58.256468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014676 ] 00:07:27.386 [2024-05-15 03:01:58.353170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.386 [2024-05-15 03:01:58.448838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.953 [2024-05-15 03:01:59.006636] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:28.266 03:01:59 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:28.266 03:01:59 accel -- common/autotest_common.sh@860 -- # return 0 00:07:28.266 03:01:59 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:28.266 03:01:59 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:28.266 03:01:59 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:28.266 03:01:59 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:07:28.266 03:01:59 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:07:28.267 03:01:59 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:07:28.267 03:01:59 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.267 03:01:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.267 03:01:59 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:07:28.267 03:01:59 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:07:28.267 03:01:59 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.267 "method": "compressdev_scan_accel_module", 00:07:28.267 03:01:59 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:28.267 03:01:59 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:28.267 03:01:59 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:28.267 03:01:59 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.267 03:01:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.267 03:01:59 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module 
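The expected-opcode walk running here pins compress and decompress to dpdk_compressdev and leaves every other opcode on the software module. The underlying query can be reproduced roughly with the same RPC and jq filter seen in the trace, again using the hypothetical SPDK_DIR shorthand:

    # Sketch only: list opcode-to-module assignments.
    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'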
00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS==
00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS==
00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:28.267 03:01:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # IFS==
00:07:28.267 03:01:59 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:28.267 03:01:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:28.267 03:01:59 accel -- accel/accel.sh@75 -- # killprocess 4014676
00:07:28.267 03:01:59 accel -- common/autotest_common.sh@946 -- # '[' -z 4014676 ']'
00:07:28.267 03:01:59 accel -- common/autotest_common.sh@950 -- # kill -0 4014676
00:07:28.267 03:01:59 accel -- common/autotest_common.sh@951 -- # uname
00:07:28.267 03:01:59 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:07:28.267 03:01:59 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4014676
00:07:28.526 03:01:59 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:07:28.526 03:01:59 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:07:28.526 03:01:59 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4014676'
00:07:28.526 killing process with pid 4014676
00:07:28.526 03:01:59 accel -- common/autotest_common.sh@965 -- # kill 4014676
00:07:28.526 03:01:59 accel -- common/autotest_common.sh@970 -- # wait 4014676
00:07:28.786 03:01:59 accel -- accel/accel.sh@76 -- # trap - ERR
00:07:28.786 03:01:59 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:28.786 03:01:59 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']'
00:07:28.786 03:01:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:28.786 03:01:59 accel -- common/autotest_common.sh@10 -- # set +x
00:07:28.786 ************************************
00:07:28.786 START TEST accel_cdev_comp
00:07:28.786 ************************************
00:07:28.786 03:01:59 accel.accel_cdev_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc
00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module
00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 03:01:59
accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:28.786 03:01:59 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:07:28.786 [2024-05-15 03:01:59.876799] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:07:28.786 [2024-05-15 03:01:59.876834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014981 ] 00:07:29.045 [2024-05-15 03:01:59.959897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.045 [2024-05-15 03:02:00.060445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.613 [2024-05-15 03:02:00.617722] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:29.613 [2024-05-15 03:02:00.620023] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xda36d0 PMD being used: compress_qat 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.613 [2024-05-15 03:02:00.623964] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xda8390 PMD being used: compress_qat 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.613 03:02:00 accel.accel_cdev_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.613 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case 
"$var" in 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.614 03:02:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:30.990 03:02:01 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:30.990 00:07:30.990 real 0m1.956s 00:07:30.990 user 0m1.584s 00:07:30.990 sys 0m0.372s 00:07:30.990 03:02:01 accel.accel_cdev_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.990 03:02:01 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:07:30.990 ************************************ 00:07:30.990 END TEST accel_cdev_comp 00:07:30.990 ************************************ 00:07:30.990 03:02:01 accel -- accel/accel.sh@128 -- # run_test 
accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:30.990 03:02:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:30.990 03:02:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.990 03:02:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.990 ************************************ 00:07:30.990 START TEST accel_cdev_decomp 00:07:30.990 ************************************ 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:30.990 03:02:01 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:30.990 [2024-05-15 03:02:01.922913] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
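The xtrace above echoes the exact accel_perf command line that accel.sh assembles for each workload, with the accel JSON config handed in over /dev/fd/62. As a minimal sketch (not the harness itself), such a run could be reproduced by hand roughly as below; every flag is taken verbatim from the trace, but the subsystems wrapper around the compressdev_scan_accel_module fragment is an assumption, since the exact envelope accel.sh builds is not shown in this excerpt:

# Hedged sketch: re-run the decompress case by hand. Flags (-c/-t/-w/-l/-y)
# appear verbatim in the trace above; the JSON wrapper is assumed.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
CFG='{"subsystems":[{"subsystem":"accel","config":[{"method":"compressdev_scan_accel_module","params":{"pmd":0}}]}]}'
$SPDK/build/examples/accel_perf -c <(echo "$CFG") -t 1 -w decompress -l $SPDK/test/accel/bib -y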
00:07:30.990 [2024-05-15 03:02:01.922967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015233 ] 00:07:30.990 [2024-05-15 03:02:02.024630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.990 [2024-05-15 03:02:02.114579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.560 [2024-05-15 03:02:02.681729] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:31.560 [2024-05-15 03:02:02.684042] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x9776d0 PMD being used: compress_qat 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.560 [2024-05-15 03:02:02.688128] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x97c390 PMD being used: compress_qat 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 
-- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.560 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.561 03:02:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.561 03:02:02 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:32.936 00:07:32.936 real 0m1.993s 00:07:32.936 user 0m1.585s 00:07:32.936 sys 0m0.409s 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.936 03:02:03 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:32.936 ************************************ 00:07:32.936 END TEST accel_cdev_decomp 00:07:32.936 ************************************ 00:07:32.936 03:02:03 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:32.936 03:02:03 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:32.936 03:02:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.936 03:02:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.936 ************************************ 00:07:32.936 START TEST accel_cdev_decmop_full 00:07:32.936 ************************************ 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 
-o 0 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:32.936 03:02:03 accel.accel_cdev_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:32.936 [2024-05-15 03:02:03.988398] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:07:32.936 [2024-05-15 03:02:03.988456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015694 ] 00:07:32.936 [2024-05-15 03:02:04.086758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.194 [2024-05-15 03:02:04.176828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.763 [2024-05-15 03:02:04.741889] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:33.763 [2024-05-15 03:02:04.744202] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x26816d0 PMD being used: compress_qat 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 [2024-05-15 03:02:04.747389] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x26849b0 PMD being used: compress_qat 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 
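Note the buffer-size change between variants: the earlier compress/decompress runs trace val='4096 bytes', while this -o 0 run traces val='111250 bytes'. Judging from the trace alone, -o 0 appears to make accel_perf operate on the whole bib test file as a single buffer rather than 4 KiB chunks; that reading is an inference from the traced sizes, not something this excerpt states.

# Same workload as the sketch above ($SPDK and $CFG as defined there),
# widened to the full file via -o 0 (interpretation inferred from the trace).
$SPDK/build/examples/accel_perf -c <(echo "$CFG") -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0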
00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 
accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.763 03:02:04 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@20 -- # val= 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:35.141 00:07:35.141 real 0m1.982s 00:07:35.141 user 0m1.592s 00:07:35.141 sys 0m0.384s 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.141 03:02:05 accel.accel_cdev_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:35.141 ************************************ 00:07:35.141 END TEST accel_cdev_decmop_full 00:07:35.141 ************************************ 00:07:35.141 03:02:05 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:35.141 03:02:05 accel -- common/autotest_common.sh@1097 -- 
# '[' 11 -le 1 ']' 00:07:35.141 03:02:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.141 03:02:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.141 ************************************ 00:07:35.141 START TEST accel_cdev_decomp_mcore 00:07:35.141 ************************************ 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.141 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:35.142 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:35.142 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:35.142 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:35.142 [2024-05-15 03:02:06.040965] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:07:35.142 [2024-05-15 03:02:06.041021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015952 ] 00:07:35.142 [2024-05-15 03:02:06.140758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.142 [2024-05-15 03:02:06.234833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.142 [2024-05-15 03:02:06.234930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.142 [2024-05-15 03:02:06.234972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.142 [2024-05-15 03:02:06.234971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.710 [2024-05-15 03:02:06.790650] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:35.710 [2024-05-15 03:02:06.792957] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x227ad20 PMD being used: compress_qat 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 [2024-05-15 03:02:06.798379] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fac6419b890 PMD being used: compress_qat 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 [2024-05-15 03:02:06.798995] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fac5c19b890 PMD being used: compress_qat 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 [2024-05-15 03:02:06.800260] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2280250 PMD being used: compress_qat 00:07:35.710 [2024-05-15 03:02:06.800344] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fac5419b890 PMD being used: compress_qat 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 03:02:06 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
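The mcore variant only adds a reactor core mask. 0xf is binary 1111, i.e. cores 0 through 3, which matches the four "Reactor started on core N" notices and the extra per-core compress_qat channels in the trace above.

# Multi-core run; -m 0xf selects cores 0-3 ($SPDK and $CFG as in the
# earlier sketch; the -m flag appears verbatim in the trace above).
$SPDK/build/examples/accel_perf -c <(echo "$CFG") -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf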
00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:37.089 00:07:37.089 real 0m2.003s 00:07:37.089 user 0m6.534s 00:07:37.089 sys 0m0.427s 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.089 03:02:08 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:37.089 ************************************ 00:07:37.089 END TEST accel_cdev_decomp_mcore 00:07:37.089 ************************************ 00:07:37.089 03:02:08 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.089 03:02:08 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:37.089 03:02:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.089 03:02:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.089 ************************************ 00:07:37.089 START TEST accel_cdev_decomp_full_mcore 00:07:37.089 ************************************ 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.089 03:02:08 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:37.089 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:37.090 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:37.090 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:37.090 [2024-05-15 03:02:08.102872] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:07:37.090 [2024-05-15 03:02:08.102928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4016416 ] 00:07:37.090 [2024-05-15 03:02:08.201163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.349 [2024-05-15 03:02:08.295441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.349 [2024-05-15 03:02:08.295535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.349 [2024-05-15 03:02:08.295642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.349 [2024-05-15 03:02:08.295643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.917 [2024-05-15 03:02:08.849637] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:37.917 [2024-05-15 03:02:08.851945] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x8b9d20 PMD being used: compress_qat 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.917 [2024-05-15 03:02:08.856508] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f20d019b890 PMD being used: compress_qat 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:37.917 [2024-05-15 03:02:08.857117] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f20c819b890 PMD being used: compress_qat 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.917 03:02:08 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.917 [2024-05-15 03:02:08.858388] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x8b9dc0 PMD being used: compress_qat 00:07:37.917 [2024-05-15 03:02:08.858514] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f20c019b890 PMD being used: compress_qat 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:37.917 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.918 03:02:08 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@19 -- # read -r var val 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:39.293 00:07:39.293 real 0m1.997s 00:07:39.293 user 0m6.540s 00:07:39.293 sys 0m0.402s 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.293 03:02:10 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:39.293 ************************************ 00:07:39.293 END TEST accel_cdev_decomp_full_mcore 00:07:39.293 ************************************ 00:07:39.293 03:02:10 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.293 03:02:10 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:39.293 03:02:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.293 03:02:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.293 
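The wall of accel/accel.sh@19-21 entries above is bash xtrace from the harness's result parser: accel_perf's summary output is read line by line, split at the first ':' into var/val, and the opcode and module it reports are latched for the final accel.sh@27 assertions. A minimal sketch of that loop, with the matched key names being assumptions (only IFS=:, read -r var val, the case statement, and the trailing [[ -n ... ]] checks appear verbatim in the trace):

  # Parse accel_perf's "key: value" summary and remember what it reported.
  accel_opc="" accel_module=""
  while IFS=: read -r var val; do
      case "$var" in
          *"Workload Type"*) accel_opc=${val##* } ;;   # assumed key; yields e.g. "decompress"
          *"Module"*) accel_module=${val##* } ;;       # assumed key; yields e.g. "dpdk_compressdev"
      esac
  done < <(./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
               -l test/accel/bib -y -T 2 62< accel.json)
  # The accel.sh@27 checks seen in the trace:
  [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == dpdk_compressdev ]]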
************************************ 00:07:39.293 START TEST accel_cdev_decomp_mthread 00:07:39.293 ************************************ 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:39.293 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:39.293 [2024-05-15 03:02:10.169626] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
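The build_accel_config trace in this test's prologue shows how the JSON handed to accel_perf on /dev/fd/62 is assembled: RPC snippets are appended to the accel_json_cfg array ({"method": "compressdev_scan_accel_module", "params":{"pmd": 0}} here), joined with IFS=',', and normalized with jq -r. A rough reconstruction; the outer "subsystems" wrapper is an assumption, since the trace only shows the array append, local IFS=, and jq -r . steps:

  # Collect per-module RPC snippets, then wrap them in an accel subsystem config.
  accel_json_cfg=('{"method": "compressdev_scan_accel_module", "params": {"pmd": 0}}')
  config=$(IFS=,; echo "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}")
  jq -r . <<< "$config" > /tmp/accel.json   # validate and normalize the JSON

  # accel_perf reads its config from file descriptor 62 (-c /dev/fd/62).
  ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
      -l test/accel/bib -y -T 2 62< /tmp/accel.json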
00:07:39.293 [2024-05-15 03:02:10.169677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4016675 ] 00:07:39.293 [2024-05-15 03:02:10.267891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.293 [2024-05-15 03:02:10.357626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.860 [2024-05-15 03:02:10.935534] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:39.860 [2024-05-15 03:02:10.937882] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x21b96d0 PMD being used: compress_qat 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.860 [2024-05-15 03:02:10.942703] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x21be720 PMD being used: compress_qat 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 [2024-05-15 03:02:10.945083] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x22e1520 PMD being used: compress_qat 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread 
-- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.860 03:02:10 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.235 
03:02:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:41.235 00:07:41.235 real 0m1.999s 00:07:41.235 user 0m1.586s 00:07:41.235 sys 0m0.407s 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.235 03:02:12 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:41.235 ************************************ 00:07:41.235 END TEST accel_cdev_decomp_mthread 00:07:41.235 ************************************ 00:07:41.235 03:02:12 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:41.235 03:02:12 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:41.235 03:02:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.235 03:02:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.235 ************************************ 00:07:41.235 START TEST accel_cdev_decomp_full_mthread 00:07:41.235 ************************************ 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:41.235 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:41.235 [2024-05-15 03:02:12.247299] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
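The run that just ended decompressed 4096-byte payloads; the full_mthread variant starting here adds -o 0, and its trace below reports '111250 bytes' per operation, which matches the size of the whole bib test file. Side by side (flags verbatim from the log; reading "-o 0" as "size the operation from the file" is an inference from those two payload sizes, not documented here):

  # Plain mthread run: 4096-byte operations, two worker threads (-T 2).
  ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -T 2
  # "Full" mthread run: -o 0, so the op covers the full 111250-byte bib payload.
  ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2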
00:07:41.235 [2024-05-15 03:02:12.247352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017137 ] 00:07:41.235 [2024-05-15 03:02:12.345474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.494 [2024-05-15 03:02:12.436302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.061 [2024-05-15 03:02:12.995586] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:42.061 [2024-05-15 03:02:12.997917] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x27876d0 PMD being used: compress_qat 00:07:42.061 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.061 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.061 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.061 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.061 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.061 03:02:12 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.061 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.061 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.061 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.061 [2024-05-15 03:02:13.001926] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x278a9b0 PMD being used: compress_qat 00:07:42.061 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.061 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.061 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.061 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:42.061 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.061 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.061 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.062 [2024-05-15 03:02:13.004737] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x28af180 PMD being used: compress_qat 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.062 03:02:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.436 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.437 03:02:14 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:43.437 00:07:43.437 real 0m1.984s 00:07:43.437 user 0m1.574s 00:07:43.437 sys 0m0.406s 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.437 03:02:14 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:43.437 ************************************ 00:07:43.437 END TEST accel_cdev_decomp_full_mthread 00:07:43.437 ************************************ 00:07:43.437 03:02:14 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:07:43.437 03:02:14 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:43.437 03:02:14 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:43.437 03:02:14 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:43.437 03:02:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.437 03:02:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.437 03:02:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.437 03:02:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.437 03:02:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.437 03:02:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.437 03:02:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.437 03:02:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:43.437 03:02:14 accel -- accel/accel.sh@41 -- # jq -r . 00:07:43.437 ************************************ 00:07:43.437 START TEST accel_dif_functional_tests 00:07:43.437 ************************************ 00:07:43.437 03:02:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:43.437 [2024-05-15 03:02:14.330485] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
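accel_dif_functional_tests below is not accel_perf but the standalone CUnit binary test/accel/dif/dif, launched through the same fd-62 config mechanism; since no module flag is set ([[ -n '' ]] in the trace above), the config carries no accel module entries. Run by hand it would look roughly like this, the empty config shape being an assumption:

  # Launch the DIF CUnit suite against the default (software) accel modules.
  cfg='{"subsystems": [{"subsystem": "accel", "config": []}]}'
  ./test/accel/dif/dif -c /dev/fd/62 62<<< "$cfg"

Note that the *ERROR* lines in the CUnit output below are expected: the "not generated"/"incorrect" cases deliberately corrupt the Guard, App Tag, or Ref Tag and the test passes when verification fails.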
00:07:43.437 [2024-05-15 03:02:14.330536] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017395 ]
00:07:43.437 [2024-05-15 03:02:14.428137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:43.437 [2024-05-15 03:02:14.521521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:43.437 [2024-05-15 03:02:14.521618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:43.437 [2024-05-15 03:02:14.521623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.695
00:07:43.695
00:07:43.695 CUnit - A unit testing framework for C - Version 2.1-3
00:07:43.695 http://cunit.sourceforge.net/
00:07:43.695
00:07:43.695
00:07:43.695 Suite: accel_dif
00:07:43.695 Test: verify: DIF generated, GUARD check ...passed
00:07:43.695 Test: verify: DIF generated, APPTAG check ...passed
00:07:43.695 Test: verify: DIF generated, REFTAG check ...passed
00:07:43.695 Test: verify: DIF not generated, GUARD check ...[2024-05-15 03:02:14.608398] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:43.695 [2024-05-15 03:02:14.608445] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:43.695 passed
00:07:43.695 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 03:02:14.608487] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:43.695 [2024-05-15 03:02:14.608508] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:43.695 passed
00:07:43.695 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 03:02:14.608531] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:43.695 [2024-05-15 03:02:14.608556] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:43.695 passed
00:07:43.695 Test: verify: APPTAG correct, APPTAG check ...passed
00:07:43.695 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 03:02:14.608617] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:07:43.695 passed
00:07:43.695 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:07:43.695 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:07:43.695 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:07:43.695 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 03:02:14.608763] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:07:43.695 passed
00:07:43.695 Test: generate copy: DIF generated, GUARD check ...passed
00:07:43.695 Test: generate copy: DIF generated, APTTAG check ...passed
00:07:43.695 Test: generate copy: DIF generated, REFTAG check ...passed
00:07:43.695 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:07:43.695 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:07:43.695 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:07:43.695 Test: generate copy: iovecs-len validate ...[2024-05-15 03:02:14.609034] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
passed
00:07:43.695 Test: generate copy: buffer alignment validate ...passed
00:07:43.695
00:07:43.695 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.695               suites      1      1    n/a      0        0
00:07:43.695                tests     20     20     20      0        0
00:07:43.695              asserts    204    204    204      0      n/a
00:07:43.695
00:07:43.695 Elapsed time = 0.002 seconds
00:07:43.696
00:07:43.696 real 0m0.552s
00:07:43.696 user 0m0.766s
00:07:43.696 sys 0m0.187s
00:07:43.696 03:02:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:43.696 03:02:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:07:43.696 ************************************
00:07:43.696 END TEST accel_dif_functional_tests
00:07:43.696 ************************************
00:07:43.955
00:07:43.955 real 0m50.359s
00:07:43.955 user 1m0.012s
00:07:43.955 sys 0m8.970s
00:07:43.955 03:02:14 accel -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:43.955 03:02:14 accel -- common/autotest_common.sh@10 -- # set +x
00:07:43.955 ************************************
00:07:43.955 END TEST accel
00:07:43.955 ************************************
00:07:43.955 03:02:14 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:43.955 03:02:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:07:43.955 03:02:14 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:43.955 03:02:14 -- common/autotest_common.sh@10 -- # set +x
00:07:43.955 ************************************
00:07:43.955 START TEST accel_rpc
00:07:43.955 ************************************
00:07:43.955 03:02:14 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:43.955 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel
00:07:43.955 03:02:15 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:43.955 03:02:15 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4017671
00:07:43.955 03:02:15 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4017671
00:07:43.955 03:02:15 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:07:43.955 03:02:15 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 4017671 ']'
00:07:43.955 03:02:15 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:43.955 03:02:15 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:07:43.955 03:02:15 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:43.956 03:02:15 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:07:43.956 03:02:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:43.956 [2024-05-15 03:02:15.094104] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
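From this point each suite drives a long-lived spdk_tgt over the /var/tmp/spdk.sock RPC socket, and the trace above spells out the lifecycle: set an ERR trap, start the target paused, wait for the socket, then tear down. As a pattern (helper names verbatim from autotest_common.sh; the backgrounding plumbing is implied by the trace rather than shown in it):

  trap 'killprocess $spdk_tgt_pid; exit 1' ERR

  ./build/bin/spdk_tgt --wait-for-rpc &    # stays paused until framework_start_init
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"            # polls /var/tmp/spdk.sock until it answers

  # ... per-test RPC calls ...

  killprocess "$spdk_tgt_pid"              # probes with kill -0, kills, then waits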
00:07:43.956 [2024-05-15 03:02:15.094162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017671 ] 00:07:44.234 [2024-05-15 03:02:15.193920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.234 [2024-05-15 03:02:15.286917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.189 03:02:16 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:45.189 03:02:16 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:45.189 03:02:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:45.189 03:02:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:45.189 03:02:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:45.189 03:02:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:45.189 03:02:16 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:45.189 03:02:16 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:45.189 03:02:16 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.189 03:02:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.189 ************************************ 00:07:45.189 START TEST accel_assign_opcode 00:07:45.189 ************************************ 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.189 [2024-05-15 03:02:16.081356] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.189 [2024-05-15 03:02:16.089367] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # 
set +x 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:45.189 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.446 software 00:07:45.446 00:07:45.446 real 0m0.281s 00:07:45.446 user 0m0.048s 00:07:45.446 sys 0m0.007s 00:07:45.446 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.446 03:02:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.446 ************************************ 00:07:45.446 END TEST accel_assign_opcode 00:07:45.446 ************************************ 00:07:45.446 03:02:16 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4017671 00:07:45.446 03:02:16 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 4017671 ']' 00:07:45.446 03:02:16 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 4017671 00:07:45.446 03:02:16 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:45.446 03:02:16 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:45.446 03:02:16 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4017671 00:07:45.446 03:02:16 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:45.446 03:02:16 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:45.446 03:02:16 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4017671' 00:07:45.446 killing process with pid 4017671 00:07:45.446 03:02:16 accel_rpc -- common/autotest_common.sh@965 -- # kill 4017671 00:07:45.446 03:02:16 accel_rpc -- common/autotest_common.sh@970 -- # wait 4017671 00:07:45.704 00:07:45.704 real 0m1.865s 00:07:45.704 user 0m2.010s 00:07:45.704 sys 0m0.496s 00:07:45.704 03:02:16 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.704 03:02:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.704 ************************************ 00:07:45.704 END TEST accel_rpc 00:07:45.704 ************************************ 00:07:45.704 03:02:16 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:45.704 03:02:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:45.704 03:02:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.704 03:02:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.961 ************************************ 00:07:45.961 START TEST app_cmdline 00:07:45.961 ************************************ 00:07:45.961 03:02:16 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:45.961 * Looking for test storage... 
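The accel_assign_opcode test that just passed is, stripped of the harness, a short RPC conversation (commands verbatim from the accel_rpc.sh trace above; rpc_cmd is a thin wrapper over scripts/rpc.py):

  # Assign the copy opcode to a nonexistent module, then to the real software one;
  # both are accepted before init, and the last assignment wins.
  ./scripts/rpc.py accel_assign_opc -o copy -m incorrect
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  # Verify, exactly as accel_rpc.sh@42 does above:
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software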
00:07:45.961 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:45.961 03:02:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:45.961 03:02:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4017983 00:07:45.961 03:02:16 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:45.961 03:02:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4017983 00:07:45.961 03:02:16 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 4017983 ']' 00:07:45.961 03:02:16 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.961 03:02:16 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:45.961 03:02:16 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.961 03:02:16 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:45.961 03:02:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:45.961 [2024-05-15 03:02:17.023743] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:07:45.961 [2024-05-15 03:02:17.023802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017983 ] 00:07:46.219 [2024-05-15 03:02:17.121045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.219 [2024-05-15 03:02:17.217112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.151 03:02:17 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.151 03:02:17 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:47.151 03:02:17 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:47.151 { 00:07:47.151 "version": "SPDK v24.05-pre git sha1 2b14ffc34", 00:07:47.151 "fields": { 00:07:47.151 "major": 24, 00:07:47.151 "minor": 5, 00:07:47.151 "patch": 0, 00:07:47.151 "suffix": "-pre", 00:07:47.151 "commit": "2b14ffc34" 00:07:47.151 } 00:07:47.151 } 00:07:47.151 03:02:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:47.151 03:02:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:47.151 03:02:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:47.151 03:02:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:47.151 03:02:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:47.151 03:02:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:47.151 03:02:18 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.151 03:02:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:47.151 03:02:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:47.151 03:02:18 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.151 03:02:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:47.151 03:02:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:47.151 
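The cmdline suite above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods answer; the env_dpdk_get_mem_stats attempt below is the negative check. Reduced to commands (all method names verbatim from the log):

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

  # Allowed: returns the version object logged above (v24.05-pre, sha1 2b14ffc34).
  ./scripts/rpc.py spdk_get_version
  # Allowed: under the allow-list this lists exactly the two permitted methods.
  ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
  # Anything else fails with JSON-RPC error -32601, "Method not found":
  ./scripts/rpc.py env_dpdk_get_mem_stats || echo rejected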
03:02:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:47.151 03:02:18 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:47.152 03:02:18 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:47.152 03:02:18 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:47.152 03:02:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.152 03:02:18 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:47.152 03:02:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.152 03:02:18 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:47.152 03:02:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.152 03:02:18 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:47.152 03:02:18 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:07:47.152 03:02:18 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:47.410 request: 00:07:47.410 { 00:07:47.410 "method": "env_dpdk_get_mem_stats", 00:07:47.410 "req_id": 1 00:07:47.410 } 00:07:47.410 Got JSON-RPC error response 00:07:47.410 response: 00:07:47.410 { 00:07:47.410 "code": -32601, 00:07:47.410 "message": "Method not found" 00:07:47.410 } 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.410 03:02:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4017983 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 4017983 ']' 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 4017983 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4017983 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4017983' 00:07:47.410 killing process with pid 4017983 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@965 -- # kill 4017983 00:07:47.410 03:02:18 app_cmdline -- common/autotest_common.sh@970 -- # wait 4017983 00:07:47.977 00:07:47.977 real 0m1.968s 00:07:47.977 user 0m2.432s 00:07:47.977 sys 0m0.491s 00:07:47.977 03:02:18 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.977 03:02:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:47.977 
************************************ 00:07:47.977 END TEST app_cmdline 00:07:47.977 ************************************ 00:07:47.977 03:02:18 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:47.977 03:02:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:47.977 03:02:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.977 03:02:18 -- common/autotest_common.sh@10 -- # set +x 00:07:47.977 ************************************ 00:07:47.977 START TEST version 00:07:47.977 ************************************ 00:07:47.977 03:02:18 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:47.977 * Looking for test storage... 00:07:47.977 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:47.977 03:02:19 version -- app/version.sh@17 -- # get_header_version major 00:07:47.977 03:02:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:47.977 03:02:19 version -- app/version.sh@14 -- # cut -f2 00:07:47.977 03:02:19 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.977 03:02:19 version -- app/version.sh@17 -- # major=24 00:07:47.977 03:02:19 version -- app/version.sh@18 -- # get_header_version minor 00:07:47.977 03:02:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:47.977 03:02:19 version -- app/version.sh@14 -- # cut -f2 00:07:47.977 03:02:19 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.977 03:02:19 version -- app/version.sh@18 -- # minor=5 00:07:47.977 03:02:19 version -- app/version.sh@19 -- # get_header_version patch 00:07:47.977 03:02:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:47.977 03:02:19 version -- app/version.sh@14 -- # cut -f2 00:07:47.977 03:02:19 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.977 03:02:19 version -- app/version.sh@19 -- # patch=0 00:07:47.977 03:02:19 version -- app/version.sh@20 -- # get_header_version suffix 00:07:47.977 03:02:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:47.977 03:02:19 version -- app/version.sh@14 -- # cut -f2 00:07:47.977 03:02:19 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.977 03:02:19 version -- app/version.sh@20 -- # suffix=-pre 00:07:47.977 03:02:19 version -- app/version.sh@22 -- # version=24.5 00:07:47.977 03:02:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:47.977 03:02:19 version -- app/version.sh@28 -- # version=24.5rc0 00:07:47.977 03:02:19 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:07:47.977 03:02:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:47.977 03:02:19 version -- app/version.sh@30 -- # py_version=24.5rc0 00:07:47.977 03:02:19 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 
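The version test above recovers the version from include/spdk/version.h with grep/cut/tr and checks it against the installed Python package. Folded into one helper it is essentially this (the grep/cut/tr stages are verbatim from the trace; mapping a '-pre' suffix to rc0 is inferred from version=24.5rc0 in the trace):

  get_header_version() {
      # $1 is MAJOR, MINOR, PATCH or SUFFIX, matching the grep patterns above.
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
          | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 24
  minor=$(get_header_version MINOR)    # 5
  patch=$(get_header_version PATCH)    # 0
  suffix=$(get_header_version SUFFIX)  # -pre
  version="$major.$minor"
  (( patch != 0 )) && version="$version.$patch"
  [[ $suffix == -pre ]] && version="${version}rc0"    # 24.5 -> 24.5rc0
  [[ $(python3 -c 'import spdk; print(spdk.__version__)') == "$version" ]]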
00:07:47.977 00:07:47.977 real 0m0.169s 00:07:47.977 user 0m0.095s 00:07:47.977 sys 0m0.111s 00:07:47.977 03:02:19 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.977 03:02:19 version -- common/autotest_common.sh@10 -- # set +x 00:07:47.977 ************************************ 00:07:47.977 END TEST version 00:07:47.977 ************************************ 00:07:47.977 03:02:19 -- spdk/autotest.sh@184 -- # '[' 1 -eq 1 ']' 00:07:47.977 03:02:19 -- spdk/autotest.sh@185 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:47.977 03:02:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:47.977 03:02:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.977 03:02:19 -- common/autotest_common.sh@10 -- # set +x 00:07:48.236 ************************************ 00:07:48.236 START TEST blockdev_general 00:07:48.236 ************************************ 00:07:48.236 03:02:19 blockdev_general -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:48.236 * Looking for test storage... 00:07:48.236 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:48.236 03:02:19 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:07:48.236 03:02:19 blockdev_general 
-- bdev/blockdev.sh@47 -- # spdk_tgt_pid=4018549 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:07:48.236 03:02:19 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 4018549 00:07:48.236 03:02:19 blockdev_general -- common/autotest_common.sh@827 -- # '[' -z 4018549 ']' 00:07:48.236 03:02:19 blockdev_general -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.236 03:02:19 blockdev_general -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:48.236 03:02:19 blockdev_general -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.236 03:02:19 blockdev_general -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:48.236 03:02:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:48.236 [2024-05-15 03:02:19.325241] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:07:48.236 [2024-05-15 03:02:19.325296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018549 ] 00:07:48.494 [2024-05-15 03:02:19.425313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.494 [2024-05-15 03:02:19.516738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.427 03:02:20 blockdev_general -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:49.427 03:02:20 blockdev_general -- common/autotest_common.sh@860 -- # return 0 00:07:49.427 03:02:20 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:07:49.428 03:02:20 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:07:49.428 03:02:20 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:07:49.428 03:02:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.428 03:02:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.428 [2024-05-15 03:02:20.506565] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:49.428 [2024-05-15 03:02:20.506621] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:49.428 00:07:49.428 [2024-05-15 03:02:20.514555] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:49.428 [2024-05-15 03:02:20.514579] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:49.428 00:07:49.428 Malloc0 00:07:49.428 Malloc1 00:07:49.428 Malloc2 00:07:49.428 Malloc3 00:07:49.428 Malloc4 00:07:49.686 Malloc5 00:07:49.686 Malloc6 00:07:49.686 Malloc7 00:07:49.686 Malloc8 00:07:49.686 Malloc9 00:07:49.686 [2024-05-15 03:02:20.651245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:49.686 [2024-05-15 03:02:20.651291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.686 [2024-05-15 03:02:20.651311] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x274ed70 00:07:49.686 [2024-05-15 03:02:20.651321] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.686 [2024-05-15 03:02:20.652749] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.686 [2024-05-15 03:02:20.652778] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:49.686 TestPT 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.686 03:02:20 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:07:49.686 5000+0 records in 00:07:49.686 5000+0 records out 00:07:49.686 10240000 bytes (10 MB, 9.8 MiB) copied, 0.017615 s, 581 MB/s 00:07:49.686 03:02:20 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.686 AIO0 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.686 03:02:20 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.686 03:02:20 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:07:49.686 03:02:20 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.686 03:02:20 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.686 03:02:20 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.686 03:02:20 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:07:49.686 03:02:20 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:07:49.686 03:02:20 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.686 03:02:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.945 03:02:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.945 03:02:20 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:07:49.945 03:02:20 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:07:49.946 03:02:20 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": 
[' ' "fe296213-ac09-4dda-878d-7d0da7594df6"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fe296213-ac09-4dda-878d-7d0da7594df6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ad2a4194-f3c7-53ff-beaa-91f0a38ec32c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ad2a4194-f3c7-53ff-beaa-91f0a38ec32c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "64179349-8322-5d16-968b-13a44648a0d4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "64179349-8322-5d16-968b-13a44648a0d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "efebdf2e-65e8-5874-97d9-909e36218d84"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "efebdf2e-65e8-5874-97d9-909e36218d84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "5da2f539-228e-5d92-8e90-1a355b1b43c1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5da2f539-228e-5d92-8e90-1a355b1b43c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' 
' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c0947e49-7a55-5bc4-b948-2ea286d3331d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c0947e49-7a55-5bc4-b948-2ea286d3331d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6a196d69-8511-5650-bae6-2fc65bc8c9bf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6a196d69-8511-5650-bae6-2fc65bc8c9bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "9984f92a-6a87-5219-be45-38dd39a872fa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9984f92a-6a87-5219-be45-38dd39a872fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "214a7d34-7c85-55e9-aecc-bfd5545c72a8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "214a7d34-7c85-55e9-aecc-bfd5545c72a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "ef75d48e-63c1-51dd-b030-8f39bc3bbcba"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' 
"uuid": "ef75d48e-63c1-51dd-b030-8f39bc3bbcba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "06468a1a-c195-5754-a085-699faee6dbd0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06468a1a-c195-5754-a085-699faee6dbd0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4fbf1ef8-0af3-50a4-ab1d-c6f4999757bb"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4fbf1ef8-0af3-50a4-ab1d-c6f4999757bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "a2e887e9-8417-4e39-99c6-4f987fa24dfd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a2e887e9-8417-4e39-99c6-4f987fa24dfd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a2e887e9-8417-4e39-99c6-4f987fa24dfd",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' 
"base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "25fe0c3f-6d12-45d4-a378-83f3a350ff20",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "a86c94dc-dde4-41f8-a659-14d2e20d1fd2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "714f15ed-7339-46c5-8083-8faf03204253"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "714f15ed-7339-46c5-8083-8faf03204253",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "714f15ed-7339-46c5-8083-8faf03204253",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "04205cd5-9fc3-4408-aca0-12c32501d66a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e4043c96-d4f4-4c1b-ada1-827f0768a1f2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "82adfe73-aaf0-4534-af3e-b85e4a47872c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "82adfe73-aaf0-4534-af3e-b85e4a47872c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "82adfe73-aaf0-4534-af3e-b85e4a47872c",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "f5be6822-0340-4898-a8cd-a6068c030427",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7f9eb1aa-c601-4117-a2fd-03a07cfd7afb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' 
'}' '{' ' "name": "AIO0",' ' "aliases": [' ' "d2c3249f-4c3d-40d3-b80b-ec8afe33000b"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "d2c3249f-4c3d-40d3-b80b-ec8afe33000b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:49.946 03:02:20 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:07:49.946 03:02:20 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:07:49.946 03:02:20 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:07:49.946 03:02:20 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 4018549 00:07:49.946 03:02:20 blockdev_general -- common/autotest_common.sh@946 -- # '[' -z 4018549 ']' 00:07:49.946 03:02:20 blockdev_general -- common/autotest_common.sh@950 -- # kill -0 4018549 00:07:49.946 03:02:20 blockdev_general -- common/autotest_common.sh@951 -- # uname 00:07:49.946 03:02:20 blockdev_general -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:49.946 03:02:20 blockdev_general -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4018549 00:07:49.946 03:02:21 blockdev_general -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:49.946 03:02:21 blockdev_general -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:49.946 03:02:21 blockdev_general -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4018549' 00:07:49.946 killing process with pid 4018549 00:07:49.946 03:02:21 blockdev_general -- common/autotest_common.sh@965 -- # kill 4018549 00:07:49.946 03:02:21 blockdev_general -- common/autotest_common.sh@970 -- # wait 4018549 00:07:50.513 03:02:21 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:50.513 03:02:21 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:50.513 03:02:21 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:50.513 03:02:21 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.513 03:02:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:50.513 ************************************ 00:07:50.513 START TEST bdev_hello_world 00:07:50.513 ************************************ 00:07:50.513 03:02:21 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:50.513 [2024-05-15 03:02:21.566273] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:07:50.513 [2024-05-15 03:02:21.566309] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018823 ] 00:07:50.513 [2024-05-15 03:02:21.650452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.772 [2024-05-15 03:02:21.741830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.772 [2024-05-15 03:02:21.898586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:50.772 [2024-05-15 03:02:21.898640] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:50.772 [2024-05-15 03:02:21.898653] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:50.772 [2024-05-15 03:02:21.906592] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:50.772 [2024-05-15 03:02:21.906617] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:50.772 [2024-05-15 03:02:21.914601] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:50.772 [2024-05-15 03:02:21.914624] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:51.030 [2024-05-15 03:02:21.986801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:51.030 [2024-05-15 03:02:21.986857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.030 [2024-05-15 03:02:21.986873] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2630810 00:07:51.030 [2024-05-15 03:02:21.986882] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.030 [2024-05-15 03:02:21.988366] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.030 [2024-05-15 03:02:21.988394] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:51.030 [2024-05-15 03:02:22.123341] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:51.030 [2024-05-15 03:02:22.123404] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:07:51.030 [2024-05-15 03:02:22.123450] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:51.030 [2024-05-15 03:02:22.123516] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:51.030 [2024-05-15 03:02:22.123602] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:51.030 [2024-05-15 03:02:22.123626] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:51.030 [2024-05-15 03:02:22.123681] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
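That round-trip is SPDK's stock hello_bdev example writing "Hello World!" through Malloc0 and reading it back. To reproduce it outside the harness, the invocation is the same one echoed above (built tree and config paths as in this workspace; the harness additionally passes a trailing empty argument):

cd /var/jenkins/workspace/crypto-phy-autotest/spdk
./build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0
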
00:07:51.030 00:07:51.030 [2024-05-15 03:02:22.123714] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:51.289 00:07:51.289 real 0m0.921s 00:07:51.289 user 0m0.639s 00:07:51.289 sys 0m0.244s 00:07:51.289 03:02:22 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.289 03:02:22 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:51.289 ************************************ 00:07:51.289 END TEST bdev_hello_world 00:07:51.289 ************************************ 00:07:51.548 03:02:22 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:07:51.548 03:02:22 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:51.548 03:02:22 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.548 03:02:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:51.548 ************************************ 00:07:51.548 START TEST bdev_bounds 00:07:51.548 ************************************ 00:07:51.548 03:02:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:07:51.548 03:02:22 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=4019054 00:07:51.548 03:02:22 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:51.548 03:02:22 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:51.548 03:02:22 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 4019054' 00:07:51.548 Process bdevio pid: 4019054 00:07:51.548 03:02:22 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 4019054 00:07:51.548 03:02:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 4019054 ']' 00:07:51.548 03:02:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.548 03:02:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.548 03:02:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.548 03:02:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.548 03:02:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:51.548 [2024-05-15 03:02:22.567910] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
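The bdev_bounds test starting here drives the bdevio app rather than a shell loop: bdevio is launched waiting on its RPC socket, then the suites are kicked off over that socket. Spelled out as plain commands, per the echoes above (-w waits for an RPC before starting subsystems; -s sets the DPDK memory size in MB), a minimal manual equivalent would be:

cd /var/jenkins/workspace/crypto-phy-autotest/spdk
./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
./test/bdev/bdevio/tests.py perform_tests
wait    # bdevio prints the CUnit run summary (as seen below), then exits
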
00:07:51.548 [2024-05-15 03:02:22.567962] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4019054 ] 00:07:51.548 [2024-05-15 03:02:22.670354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.807 [2024-05-15 03:02:22.766407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.807 [2024-05-15 03:02:22.766431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.807 [2024-05-15 03:02:22.766435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.807 [2024-05-15 03:02:22.911236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:51.808 [2024-05-15 03:02:22.911285] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:51.808 [2024-05-15 03:02:22.911297] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:51.808 [2024-05-15 03:02:22.919245] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:51.808 [2024-05-15 03:02:22.919271] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:51.808 [2024-05-15 03:02:22.927259] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:51.808 [2024-05-15 03:02:22.927283] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:52.066 [2024-05-15 03:02:22.999799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:52.066 [2024-05-15 03:02:22.999857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.066 [2024-05-15 03:02:22.999872] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdcc970 00:07:52.066 [2024-05-15 03:02:22.999882] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.066 [2024-05-15 03:02:23.001464] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.066 [2024-05-15 03:02:23.001493] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:52.633 03:02:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.633 03:02:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:07:52.633 03:02:23 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:52.633 I/O targets: 00:07:52.633 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:07:52.633 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:07:52.633 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:07:52.633 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:07:52.633 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:07:52.633 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:07:52.633 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:07:52.633 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:07:52.633 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:07:52.633 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:07:52.633 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:07:52.633 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:07:52.633 raid0: 131072 blocks of 512 bytes (64 MiB) 00:07:52.633 concat0: 131072 blocks of 512 bytes (64 MiB) 00:07:52.633 raid1: 65536 
blocks of 512 bytes (32 MiB) 00:07:52.633 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:07:52.633 00:07:52.633 00:07:52.633 CUnit - A unit testing framework for C - Version 2.1-3 00:07:52.633 http://cunit.sourceforge.net/ 00:07:52.633 00:07:52.633 00:07:52.633 Suite: bdevio tests on: AIO0 00:07:52.633 Test: blockdev write read block ...passed 00:07:52.633 Test: blockdev write zeroes read block ...passed 00:07:52.633 Test: blockdev write zeroes read no split ...passed 00:07:52.634 Test: blockdev write zeroes read split ...passed 00:07:52.634 Test: blockdev write zeroes read split partial ...passed 00:07:52.634 Test: blockdev reset ...passed 00:07:52.634 Test: blockdev write read 8 blocks ...passed 00:07:52.634 Test: blockdev write read size > 128k ...passed 00:07:52.634 Test: blockdev write read invalid size ...passed 00:07:52.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.634 Test: blockdev write read max offset ...passed 00:07:52.634 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.634 Test: blockdev writev readv 8 blocks ...passed 00:07:52.634 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.634 Test: blockdev writev readv block ...passed 00:07:52.634 Test: blockdev writev readv size > 128k ...passed 00:07:52.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.634 Test: blockdev comparev and writev ...passed 00:07:52.634 Test: blockdev nvme passthru rw ...passed 00:07:52.634 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.634 Test: blockdev nvme admin passthru ...passed 00:07:52.634 Test: blockdev copy ...passed 00:07:52.634 Suite: bdevio tests on: raid1 00:07:52.634 Test: blockdev write read block ...passed 00:07:52.634 Test: blockdev write zeroes read block ...passed 00:07:52.634 Test: blockdev write zeroes read no split ...passed 00:07:52.634 Test: blockdev write zeroes read split ...passed 00:07:52.634 Test: blockdev write zeroes read split partial ...passed 00:07:52.634 Test: blockdev reset ...passed 00:07:52.634 Test: blockdev write read 8 blocks ...passed 00:07:52.634 Test: blockdev write read size > 128k ...passed 00:07:52.634 Test: blockdev write read invalid size ...passed 00:07:52.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.634 Test: blockdev write read max offset ...passed 00:07:52.634 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.634 Test: blockdev writev readv 8 blocks ...passed 00:07:52.634 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.634 Test: blockdev writev readv block ...passed 00:07:52.634 Test: blockdev writev readv size > 128k ...passed 00:07:52.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.634 Test: blockdev comparev and writev ...passed 00:07:52.634 Test: blockdev nvme passthru rw ...passed 00:07:52.634 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.634 Test: blockdev nvme admin passthru ...passed 00:07:52.634 Test: blockdev copy ...passed 00:07:52.634 Suite: bdevio tests on: concat0 00:07:52.634 Test: blockdev write read block ...passed 00:07:52.634 Test: blockdev write zeroes read block ...passed 00:07:52.634 Test: blockdev write zeroes read no split ...passed 00:07:52.634 Test: blockdev write zeroes read split ...passed 00:07:52.634 Test: 
blockdev write zeroes read split partial ...passed 00:07:52.634 Test: blockdev reset ...passed 00:07:52.634 Test: blockdev write read 8 blocks ...passed 00:07:52.634 Test: blockdev write read size > 128k ...passed 00:07:52.634 Test: blockdev write read invalid size ...passed 00:07:52.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.634 Test: blockdev write read max offset ...passed 00:07:52.634 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.634 Test: blockdev writev readv 8 blocks ...passed 00:07:52.634 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.634 Test: blockdev writev readv block ...passed 00:07:52.634 Test: blockdev writev readv size > 128k ...passed 00:07:52.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.634 Test: blockdev comparev and writev ...passed 00:07:52.634 Test: blockdev nvme passthru rw ...passed 00:07:52.634 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.634 Test: blockdev nvme admin passthru ...passed 00:07:52.634 Test: blockdev copy ...passed 00:07:52.634 Suite: bdevio tests on: raid0 00:07:52.634 Test: blockdev write read block ...passed 00:07:52.634 Test: blockdev write zeroes read block ...passed 00:07:52.634 Test: blockdev write zeroes read no split ...passed 00:07:52.634 Test: blockdev write zeroes read split ...passed 00:07:52.634 Test: blockdev write zeroes read split partial ...passed 00:07:52.634 Test: blockdev reset ...passed 00:07:52.634 Test: blockdev write read 8 blocks ...passed 00:07:52.634 Test: blockdev write read size > 128k ...passed 00:07:52.634 Test: blockdev write read invalid size ...passed 00:07:52.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.634 Test: blockdev write read max offset ...passed 00:07:52.634 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.634 Test: blockdev writev readv 8 blocks ...passed 00:07:52.634 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.634 Test: blockdev writev readv block ...passed 00:07:52.634 Test: blockdev writev readv size > 128k ...passed 00:07:52.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.634 Test: blockdev comparev and writev ...passed 00:07:52.634 Test: blockdev nvme passthru rw ...passed 00:07:52.634 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.634 Test: blockdev nvme admin passthru ...passed 00:07:52.634 Test: blockdev copy ...passed 00:07:52.634 Suite: bdevio tests on: TestPT 00:07:52.634 Test: blockdev write read block ...passed 00:07:52.634 Test: blockdev write zeroes read block ...passed 00:07:52.634 Test: blockdev write zeroes read no split ...passed 00:07:52.634 Test: blockdev write zeroes read split ...passed 00:07:52.634 Test: blockdev write zeroes read split partial ...passed 00:07:52.634 Test: blockdev reset ...passed 00:07:52.634 Test: blockdev write read 8 blocks ...passed 00:07:52.634 Test: blockdev write read size > 128k ...passed 00:07:52.634 Test: blockdev write read invalid size ...passed 00:07:52.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.634 Test: blockdev write read max offset ...passed 00:07:52.634 Test: blockdev write read 2 blocks on 
overlapped address offset ...passed 00:07:52.634 Test: blockdev writev readv 8 blocks ...passed 00:07:52.634 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.634 Test: blockdev writev readv block ...passed 00:07:52.634 Test: blockdev writev readv size > 128k ...passed 00:07:52.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.634 Test: blockdev comparev and writev ...passed 00:07:52.634 Test: blockdev nvme passthru rw ...passed 00:07:52.634 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.634 Test: blockdev nvme admin passthru ...passed 00:07:52.634 Test: blockdev copy ...passed 00:07:52.634 Suite: bdevio tests on: Malloc2p7 00:07:52.634 Test: blockdev write read block ...passed 00:07:52.634 Test: blockdev write zeroes read block ...passed 00:07:52.634 Test: blockdev write zeroes read no split ...passed 00:07:52.634 Test: blockdev write zeroes read split ...passed 00:07:52.634 Test: blockdev write zeroes read split partial ...passed 00:07:52.634 Test: blockdev reset ...passed 00:07:52.634 Test: blockdev write read 8 blocks ...passed 00:07:52.634 Test: blockdev write read size > 128k ...passed 00:07:52.634 Test: blockdev write read invalid size ...passed 00:07:52.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.634 Test: blockdev write read max offset ...passed 00:07:52.634 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.634 Test: blockdev writev readv 8 blocks ...passed 00:07:52.634 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.634 Test: blockdev writev readv block ...passed 00:07:52.634 Test: blockdev writev readv size > 128k ...passed 00:07:52.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.634 Test: blockdev comparev and writev ...passed 00:07:52.634 Test: blockdev nvme passthru rw ...passed 00:07:52.634 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.634 Test: blockdev nvme admin passthru ...passed 00:07:52.634 Test: blockdev copy ...passed 00:07:52.634 Suite: bdevio tests on: Malloc2p6 00:07:52.634 Test: blockdev write read block ...passed 00:07:52.634 Test: blockdev write zeroes read block ...passed 00:07:52.634 Test: blockdev write zeroes read no split ...passed 00:07:52.634 Test: blockdev write zeroes read split ...passed 00:07:52.634 Test: blockdev write zeroes read split partial ...passed 00:07:52.634 Test: blockdev reset ...passed 00:07:52.634 Test: blockdev write read 8 blocks ...passed 00:07:52.634 Test: blockdev write read size > 128k ...passed 00:07:52.634 Test: blockdev write read invalid size ...passed 00:07:52.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.634 Test: blockdev write read max offset ...passed 00:07:52.634 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.634 Test: blockdev writev readv 8 blocks ...passed 00:07:52.634 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.634 Test: blockdev writev readv block ...passed 00:07:52.634 Test: blockdev writev readv size > 128k ...passed 00:07:52.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.634 Test: blockdev comparev and writev ...passed 00:07:52.634 Test: blockdev nvme passthru rw ...passed 00:07:52.634 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.634 
Test: blockdev nvme admin passthru ...passed 00:07:52.634 Test: blockdev copy ...passed 00:07:52.634 Suite: bdevio tests on: Malloc2p5 00:07:52.634 Test: blockdev write read block ...passed 00:07:52.634 Test: blockdev write zeroes read block ...passed 00:07:52.634 Test: blockdev write zeroes read no split ...passed 00:07:52.634 Test: blockdev write zeroes read split ...passed 00:07:52.634 Test: blockdev write zeroes read split partial ...passed 00:07:52.634 Test: blockdev reset ...passed 00:07:52.634 Test: blockdev write read 8 blocks ...passed 00:07:52.634 Test: blockdev write read size > 128k ...passed 00:07:52.634 Test: blockdev write read invalid size ...passed 00:07:52.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.634 Test: blockdev write read max offset ...passed 00:07:52.634 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.634 Test: blockdev writev readv 8 blocks ...passed 00:07:52.634 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.634 Test: blockdev writev readv block ...passed 00:07:52.634 Test: blockdev writev readv size > 128k ...passed 00:07:52.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.634 Test: blockdev comparev and writev ...passed 00:07:52.634 Test: blockdev nvme passthru rw ...passed 00:07:52.634 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.634 Test: blockdev nvme admin passthru ...passed 00:07:52.634 Test: blockdev copy ...passed 00:07:52.634 Suite: bdevio tests on: Malloc2p4 00:07:52.634 Test: blockdev write read block ...passed 00:07:52.634 Test: blockdev write zeroes read block ...passed 00:07:52.893 Test: blockdev write zeroes read no split ...passed 00:07:52.893 Test: blockdev write zeroes read split ...passed 00:07:52.893 Test: blockdev write zeroes read split partial ...passed 00:07:52.893 Test: blockdev reset ...passed 00:07:52.893 Test: blockdev write read 8 blocks ...passed 00:07:52.893 Test: blockdev write read size > 128k ...passed 00:07:52.893 Test: blockdev write read invalid size ...passed 00:07:52.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.893 Test: blockdev write read max offset ...passed 00:07:52.893 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.893 Test: blockdev writev readv 8 blocks ...passed 00:07:52.893 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.893 Test: blockdev writev readv block ...passed 00:07:52.893 Test: blockdev writev readv size > 128k ...passed 00:07:52.893 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.893 Test: blockdev comparev and writev ...passed 00:07:52.893 Test: blockdev nvme passthru rw ...passed 00:07:52.893 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.893 Test: blockdev nvme admin passthru ...passed 00:07:52.893 Test: blockdev copy ...passed 00:07:52.893 Suite: bdevio tests on: Malloc2p3 00:07:52.893 Test: blockdev write read block ...passed 00:07:52.893 Test: blockdev write zeroes read block ...passed 00:07:52.893 Test: blockdev write zeroes read no split ...passed 00:07:52.893 Test: blockdev write zeroes read split ...passed 00:07:52.893 Test: blockdev write zeroes read split partial ...passed 00:07:52.893 Test: blockdev reset ...passed 00:07:52.893 Test: blockdev write read 8 blocks ...passed 
00:07:52.893 Test: blockdev write read size > 128k ...passed 00:07:52.893 Test: blockdev write read invalid size ...passed 00:07:52.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.893 Test: blockdev write read max offset ...passed 00:07:52.894 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.894 Test: blockdev writev readv 8 blocks ...passed 00:07:52.894 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.894 Test: blockdev writev readv block ...passed 00:07:52.894 Test: blockdev writev readv size > 128k ...passed 00:07:52.894 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.894 Test: blockdev comparev and writev ...passed 00:07:52.894 Test: blockdev nvme passthru rw ...passed 00:07:52.894 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.894 Test: blockdev nvme admin passthru ...passed 00:07:52.894 Test: blockdev copy ...passed 00:07:52.894 Suite: bdevio tests on: Malloc2p2 00:07:52.894 Test: blockdev write read block ...passed 00:07:52.894 Test: blockdev write zeroes read block ...passed 00:07:52.894 Test: blockdev write zeroes read no split ...passed 00:07:52.894 Test: blockdev write zeroes read split ...passed 00:07:52.894 Test: blockdev write zeroes read split partial ...passed 00:07:52.894 Test: blockdev reset ...passed 00:07:52.894 Test: blockdev write read 8 blocks ...passed 00:07:52.894 Test: blockdev write read size > 128k ...passed 00:07:52.894 Test: blockdev write read invalid size ...passed 00:07:52.894 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.894 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.894 Test: blockdev write read max offset ...passed 00:07:52.894 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.894 Test: blockdev writev readv 8 blocks ...passed 00:07:52.894 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.894 Test: blockdev writev readv block ...passed 00:07:52.894 Test: blockdev writev readv size > 128k ...passed 00:07:52.894 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.894 Test: blockdev comparev and writev ...passed 00:07:52.894 Test: blockdev nvme passthru rw ...passed 00:07:52.894 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.894 Test: blockdev nvme admin passthru ...passed 00:07:52.894 Test: blockdev copy ...passed 00:07:52.894 Suite: bdevio tests on: Malloc2p1 00:07:52.894 Test: blockdev write read block ...passed 00:07:52.894 Test: blockdev write zeroes read block ...passed 00:07:52.894 Test: blockdev write zeroes read no split ...passed 00:07:52.894 Test: blockdev write zeroes read split ...passed 00:07:52.894 Test: blockdev write zeroes read split partial ...passed 00:07:52.894 Test: blockdev reset ...passed 00:07:52.894 Test: blockdev write read 8 blocks ...passed 00:07:52.894 Test: blockdev write read size > 128k ...passed 00:07:52.894 Test: blockdev write read invalid size ...passed 00:07:52.894 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.894 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.894 Test: blockdev write read max offset ...passed 00:07:52.894 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.894 Test: blockdev writev readv 8 blocks ...passed 00:07:52.894 Test: blockdev writev readv 30 x 
1block ...passed 00:07:52.894 Test: blockdev writev readv block ...passed 00:07:52.894 Test: blockdev writev readv size > 128k ...passed 00:07:52.894 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.894 Test: blockdev comparev and writev ...passed 00:07:52.894 Test: blockdev nvme passthru rw ...passed 00:07:52.894 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.894 Test: blockdev nvme admin passthru ...passed 00:07:52.894 Test: blockdev copy ...passed 00:07:52.894 Suite: bdevio tests on: Malloc2p0 00:07:52.894 Test: blockdev write read block ...passed 00:07:52.894 Test: blockdev write zeroes read block ...passed 00:07:52.894 Test: blockdev write zeroes read no split ...passed 00:07:52.894 Test: blockdev write zeroes read split ...passed 00:07:52.894 Test: blockdev write zeroes read split partial ...passed 00:07:52.894 Test: blockdev reset ...passed 00:07:52.894 Test: blockdev write read 8 blocks ...passed 00:07:52.894 Test: blockdev write read size > 128k ...passed 00:07:52.894 Test: blockdev write read invalid size ...passed 00:07:52.894 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.894 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.894 Test: blockdev write read max offset ...passed 00:07:52.894 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.894 Test: blockdev writev readv 8 blocks ...passed 00:07:52.894 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.894 Test: blockdev writev readv block ...passed 00:07:52.894 Test: blockdev writev readv size > 128k ...passed 00:07:52.894 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.894 Test: blockdev comparev and writev ...passed 00:07:52.894 Test: blockdev nvme passthru rw ...passed 00:07:52.894 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.894 Test: blockdev nvme admin passthru ...passed 00:07:52.894 Test: blockdev copy ...passed 00:07:52.894 Suite: bdevio tests on: Malloc1p1 00:07:52.894 Test: blockdev write read block ...passed 00:07:52.894 Test: blockdev write zeroes read block ...passed 00:07:52.894 Test: blockdev write zeroes read no split ...passed 00:07:52.894 Test: blockdev write zeroes read split ...passed 00:07:52.894 Test: blockdev write zeroes read split partial ...passed 00:07:52.894 Test: blockdev reset ...passed 00:07:52.894 Test: blockdev write read 8 blocks ...passed 00:07:52.894 Test: blockdev write read size > 128k ...passed 00:07:52.894 Test: blockdev write read invalid size ...passed 00:07:52.894 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.894 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.894 Test: blockdev write read max offset ...passed 00:07:52.894 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.894 Test: blockdev writev readv 8 blocks ...passed 00:07:52.894 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.894 Test: blockdev writev readv block ...passed 00:07:52.894 Test: blockdev writev readv size > 128k ...passed 00:07:52.894 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.894 Test: blockdev comparev and writev ...passed 00:07:52.894 Test: blockdev nvme passthru rw ...passed 00:07:52.894 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.894 Test: blockdev nvme admin passthru ...passed 00:07:52.894 Test: blockdev copy ...passed 00:07:52.894 Suite: bdevio tests on: Malloc1p0 
00:07:52.894 Test: blockdev write read block ...passed 00:07:52.894 Test: blockdev write zeroes read block ...passed 00:07:52.894 Test: blockdev write zeroes read no split ...passed 00:07:52.894 Test: blockdev write zeroes read split ...passed 00:07:52.894 Test: blockdev write zeroes read split partial ...passed 00:07:52.894 Test: blockdev reset ...passed 00:07:52.894 Test: blockdev write read 8 blocks ...passed 00:07:52.894 Test: blockdev write read size > 128k ...passed 00:07:52.894 Test: blockdev write read invalid size ...passed 00:07:52.894 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.894 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.894 Test: blockdev write read max offset ...passed 00:07:52.894 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.894 Test: blockdev writev readv 8 blocks ...passed 00:07:52.894 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.894 Test: blockdev writev readv block ...passed 00:07:52.894 Test: blockdev writev readv size > 128k ...passed 00:07:52.894 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.894 Test: blockdev comparev and writev ...passed 00:07:52.894 Test: blockdev nvme passthru rw ...passed 00:07:52.894 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.894 Test: blockdev nvme admin passthru ...passed 00:07:52.894 Test: blockdev copy ...passed 00:07:52.894 Suite: bdevio tests on: Malloc0 00:07:52.894 Test: blockdev write read block ...passed 00:07:52.894 Test: blockdev write zeroes read block ...passed 00:07:52.894 Test: blockdev write zeroes read no split ...passed 00:07:52.894 Test: blockdev write zeroes read split ...passed 00:07:52.894 Test: blockdev write zeroes read split partial ...passed 00:07:52.894 Test: blockdev reset ...passed 00:07:52.894 Test: blockdev write read 8 blocks ...passed 00:07:52.894 Test: blockdev write read size > 128k ...passed 00:07:52.894 Test: blockdev write read invalid size ...passed 00:07:52.894 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.894 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.894 Test: blockdev write read max offset ...passed 00:07:52.894 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.894 Test: blockdev writev readv 8 blocks ...passed 00:07:52.894 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.894 Test: blockdev writev readv block ...passed 00:07:52.894 Test: blockdev writev readv size > 128k ...passed 00:07:52.894 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.894 Test: blockdev comparev and writev ...passed 00:07:52.894 Test: blockdev nvme passthru rw ...passed 00:07:52.894 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.894 Test: blockdev nvme admin passthru ...passed 00:07:52.894 Test: blockdev copy ...passed 00:07:52.894 00:07:52.894 Run Summary: Type Total Ran Passed Failed Inactive 00:07:52.894 suites 16 16 n/a 0 0 00:07:52.894 tests 368 368 368 0 0 00:07:52.894 asserts 2224 2224 2224 0 n/a 00:07:52.894 00:07:52.894 Elapsed time = 0.507 seconds 00:07:52.894 0 00:07:52.894 03:02:23 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 4019054 00:07:52.894 03:02:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 4019054 ']' 00:07:52.894 03:02:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 4019054 00:07:52.894 03:02:23 
blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:07:52.894 03:02:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:52.894 03:02:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4019054 00:07:52.894 03:02:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:52.894 03:02:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:52.894 03:02:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4019054' 00:07:52.894 killing process with pid 4019054 00:07:52.895 03:02:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@965 -- # kill 4019054 00:07:52.895 03:02:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@970 -- # wait 4019054 00:07:53.155 03:02:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:07:53.155 00:07:53.155 real 0m1.709s 00:07:53.155 user 0m4.412s 00:07:53.155 sys 0m0.415s 00:07:53.155 03:02:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.155 03:02:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:53.155 ************************************ 00:07:53.155 END TEST bdev_bounds 00:07:53.155 ************************************ 00:07:53.155 03:02:24 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:53.155 03:02:24 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:53.155 03:02:24 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.155 03:02:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:53.155 ************************************ 00:07:53.155 START TEST bdev_nbd 00:07:53.155 ************************************ 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- 
bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=4019337 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 4019337 /var/tmp/spdk-nbd.sock 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 4019337 ']' 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:53.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:53.155 03:02:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:53.414 [2024-05-15 03:02:24.355067] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
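The bdev_svc launch above follows the usual SPDK harness pattern: start the minimal app with a JSON bdev config and a private RPC socket, then poll that socket (waitforlisten) until it answers before issuing any nbd_* calls. A condensed sketch of the same pattern, assuming a built SPDK tree at $SPDK and an existing bdev.json (both placeholders here, as are the retry count and sleep interval); rpc_get_methods is the stock SPDK RPC, used only as a liveness probe:

  # host the bdevs in the minimal app, on a dedicated RPC socket
  $SPDK/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json bdev.json &
  svc_pid=$!
  # waitforlisten equivalent: retry until the UNIX socket accepts RPCs
  for _ in $(seq 1 100); do
    $SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done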
00:07:53.414 [2024-05-15 03:02:24.355120] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.414 [2024-05-15 03:02:24.452199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.414 [2024-05-15 03:02:24.545847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.674 [2024-05-15 03:02:24.694610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:53.674 [2024-05-15 03:02:24.694665] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:53.674 [2024-05-15 03:02:24.694677] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:53.674 [2024-05-15 03:02:24.702618] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:53.674 [2024-05-15 03:02:24.702645] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:53.674 [2024-05-15 03:02:24.710630] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:53.674 [2024-05-15 03:02:24.710653] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:53.674 [2024-05-15 03:02:24.782694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:53.674 [2024-05-15 03:02:24.782741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.674 [2024-05-15 03:02:24.782756] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1582d00 00:07:53.674 [2024-05-15 03:02:24.782766] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.674 [2024-05-15 03:02:24.784247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.674 [2024-05-15 03:02:24.784275] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 
'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:54.242 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.501 1+0 records in 00:07:54.501 1+0 records out 00:07:54.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244519 s, 16.8 MB/s 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:54.501 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.761 1+0 records in 00:07:54.761 1+0 records out 00:07:54.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257566 s, 15.9 MB/s 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:54.761 03:02:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:07:55.020 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:55.020 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:55.020 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:55.020 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:07:55.020 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:55.020 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:55.020 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:55.020 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:07:55.020 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:55.020 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:55.020 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:55.020 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.020 1+0 records in 00:07:55.020 1+0 records out 00:07:55.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251032 s, 16.3 MB/s 00:07:55.020 03:02:26 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.279 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:55.279 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.279 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:55.279 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:55.279 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.279 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.279 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:07:55.279 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:55.279 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.539 1+0 records in 00:07:55.539 1+0 records out 00:07:55.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242094 s, 16.9 MB/s 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 
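Each bring-up cycle in this stretch of the log is the same three-step check, once per bdev: nbd_start_disk attaches the bdev to a free kernel NBD node and prints the device path, waitfornbd polls /proc/partitions (up to 20 attempts) until the node appears, and a single 4 KiB direct-I/O dd read proves the device actually answers I/O. A condensed sketch under the same assumptions as above ($SPDK, the scratch path, and the retry pacing are illustrative; the RPC names and the grep/dd commands mirror the log):

  # attach one bdev; SPDK picks the next free /dev/nbdN and prints its path
  dev=$($SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0)
  nbd=$(basename "$dev")
  # waitfornbd: the partition-table entry can lag the RPC, so poll for it
  for _ in $(seq 1 20); do
    grep -q -w "$nbd" /proc/partitions && break
    sleep 0.1
  done
  # smoke-test with one O_DIRECT block; a failure here means the NBD setup broke
  dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct

The MB/s figure dd prints after each check is simply bytes over elapsed time (e.g. 4096 B / 0.000242094 s ≈ 16.9 MB/s); with a single 4 KiB block it reflects per-request latency, not sustained bandwidth, which is why the values drift downward as more NBD nodes come up.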
00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.539 1+0 records in 00:07:55.539 1+0 records out 00:07:55.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282587 s, 14.5 MB/s 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.539 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.798 1+0 records in 
00:07:55.798 1+0 records out 00:07:55.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281212 s, 14.6 MB/s 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.798 03:02:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.366 1+0 records in 00:07:56.366 1+0 records out 00:07:56.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342878 s, 11.9 MB/s 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:07:56.366 03:02:27 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd7 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd7 /proc/partitions 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.366 1+0 records in 00:07:56.366 1+0 records out 00:07:56.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342139 s, 12.0 MB/s 00:07:56.366 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd8 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd8 /proc/partitions 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:56.625 03:02:27 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@881 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.626 1+0 records in 00:07:56.626 1+0 records out 00:07:56.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358959 s, 11.4 MB/s 00:07:56.626 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.626 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:56.626 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.626 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:56.626 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:56.626 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.626 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:56.626 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd9 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd9 /proc/partitions 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.885 1+0 records in 00:07:56.885 1+0 records out 00:07:56.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455095 s, 9.0 MB/s 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:56.885 03:02:27 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.144 1+0 records in 00:07:57.144 1+0 records out 00:07:57.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034943 s, 11.7 MB/s 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:57.144 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( 
i = 1 )) 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.403 1+0 records in 00:07:57.403 1+0 records out 00:07:57.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406439 s, 10.1 MB/s 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:57.403 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.663 1+0 records in 00:07:57.663 1+0 records out 00:07:57.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343793 s, 11.9 MB/s 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 
-- # (( i++ )) 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:57.663 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.922 1+0 records in 00:07:57.922 1+0 records out 00:07:57.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389367 s, 10.5 MB/s 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:57.922 03:02:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.922 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:57.922 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:57.922 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:57.922 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:57.922 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:07:58.181 03:02:29 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.181 1+0 records in 00:07:58.181 1+0 records out 00:07:58.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477834 s, 8.6 MB/s 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:58.181 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd15 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd15 /proc/partitions 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.473 1+0 records in 00:07:58.473 1+0 records out 00:07:58.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479146 s, 8.5 MB/s 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:58.473 
03:02:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:58.473 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.733 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd0", 00:07:58.733 "bdev_name": "Malloc0" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd1", 00:07:58.733 "bdev_name": "Malloc1p0" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd2", 00:07:58.733 "bdev_name": "Malloc1p1" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd3", 00:07:58.733 "bdev_name": "Malloc2p0" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd4", 00:07:58.733 "bdev_name": "Malloc2p1" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd5", 00:07:58.733 "bdev_name": "Malloc2p2" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd6", 00:07:58.733 "bdev_name": "Malloc2p3" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd7", 00:07:58.733 "bdev_name": "Malloc2p4" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd8", 00:07:58.733 "bdev_name": "Malloc2p5" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd9", 00:07:58.733 "bdev_name": "Malloc2p6" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd10", 00:07:58.733 "bdev_name": "Malloc2p7" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd11", 00:07:58.733 "bdev_name": "TestPT" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd12", 00:07:58.733 "bdev_name": "raid0" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd13", 00:07:58.733 "bdev_name": "concat0" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd14", 00:07:58.733 "bdev_name": "raid1" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd15", 00:07:58.733 "bdev_name": "AIO0" 00:07:58.733 } 00:07:58.733 ]' 00:07:58.733 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:58.733 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd0", 00:07:58.733 "bdev_name": "Malloc0" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd1", 00:07:58.733 "bdev_name": "Malloc1p0" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd2", 00:07:58.733 "bdev_name": "Malloc1p1" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd3", 00:07:58.733 "bdev_name": "Malloc2p0" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd4", 00:07:58.733 "bdev_name": "Malloc2p1" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd5", 00:07:58.733 "bdev_name": "Malloc2p2" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd6", 00:07:58.733 "bdev_name": "Malloc2p3" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd7", 00:07:58.733 "bdev_name": "Malloc2p4" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd8", 00:07:58.733 "bdev_name": "Malloc2p5" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": 
"/dev/nbd9", 00:07:58.733 "bdev_name": "Malloc2p6" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd10", 00:07:58.733 "bdev_name": "Malloc2p7" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd11", 00:07:58.733 "bdev_name": "TestPT" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd12", 00:07:58.733 "bdev_name": "raid0" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd13", 00:07:58.733 "bdev_name": "concat0" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd14", 00:07:58.733 "bdev_name": "raid1" 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "nbd_device": "/dev/nbd15", 00:07:58.733 "bdev_name": "AIO0" 00:07:58.733 } 00:07:58.733 ]' 00:07:58.733 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:58.733 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:07:58.733 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.733 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:07:58.733 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:58.733 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:58.733 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.733 03:02:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:58.993 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:58.993 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:58.993 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:58.993 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.993 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.993 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:58.993 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.993 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.993 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.993 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.560 03:02:30 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.560 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:59.819 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:59.819 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:59.819 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:59.819 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.819 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.819 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:59.819 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.819 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.819 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.819 03:02:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:00.078 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.338 03:02:31 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.338 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:00.597 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:00.597 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:00.597 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:00.597 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.597 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.597 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:00.597 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.597 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.597 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.597 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:08:00.856 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:08:00.856 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:08:00.856 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:08:00.856 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.856 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.856 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:08:00.856 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.856 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.856 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.856 03:02:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:08:01.115 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:08:01.115 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:08:01.115 03:02:32 
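Driving that helper across all sixteen devices is a plain loop over the device list, as the alternating nbd_stop_disk / waitfornbd_exit pairs in the trace show. A condensed sketch, with the rpc.py and socket paths copied from the commands above and the strict-mode flags an assumption:

#!/usr/bin/env bash
set -euo pipefail   # assumed; the traced script's shell options are not visible here

rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk-nbd.sock
nbd_list=(/dev/nbd{0..15})   # expands to /dev/nbd0 .. /dev/nbd15, the stop order in the trace

for dev in "${nbd_list[@]}"; do
    "$rpc_py" -s "$rpc_sock" nbd_stop_disk "$dev"   # detach the bdev from this node
    waitfornbd_exit "${dev#/dev/}"                  # polling helper sketched above
done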
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:08:01.115 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.115 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.115 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:08:01.373 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.373 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.373 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.373 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:08:01.632 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:08:01.632 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:08:01.632 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:08:01.632 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.632 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.632 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:08:01.632 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.632 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.632 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.632 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:01.892 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:01.892 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:01.892 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:01.892 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.892 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.892 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:01.892 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.892 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.892 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.892 03:02:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:02.151 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:02.151 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:02.151 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:02.151 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.151 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.151 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:02.151 
03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.151 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.151 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.151 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:02.411 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:02.411 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:02.411 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:02.411 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.411 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.411 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:02.411 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.411 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.411 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.411 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:02.670 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:02.670 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:02.670 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:02.670 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.670 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.670 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:02.670 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.670 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.670 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.670 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:02.930 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:02.930 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:02.930 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:02.930 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.930 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.930 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:02.930 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.930 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.930 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.930 03:02:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:08:03.190 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:08:03.190 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:08:03.190 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:08:03.190 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.190 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.190 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:08:03.190 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.190 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.190 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.190 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.190 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' 
'/dev/nbd8' '/dev/nbd9') 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:03.449 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:03.709 /dev/nbd0 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.709 1+0 records in 00:08:03.709 1+0 records out 00:08:03.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226115 s, 18.1 MB/s 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:03.709 03:02:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:08:03.968 /dev/nbd1 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.968 1+0 records in 00:08:03.968 1+0 records out 00:08:03.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002291 s, 17.9 MB/s 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:03.968 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:08:04.226 /dev/nbd10 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 
)) 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.227 1+0 records in 00:08:04.227 1+0 records out 00:08:04.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259036 s, 15.8 MB/s 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:04.227 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:08:04.485 /dev/nbd11 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.485 1+0 records in 00:08:04.485 1+0 records out 00:08:04.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258389 s, 15.9 MB/s 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm 
-f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:04.485 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:08:04.755 /dev/nbd12 00:08:04.755 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.756 1+0 records in 00:08:04.756 1+0 records out 00:08:04.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232792 s, 17.6 MB/s 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:04.756 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.020 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:05.020 03:02:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:05.020 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.020 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:05.020 03:02:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:08:05.020 /dev/nbd13 00:08:05.020 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:05.020 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:05.020 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:08:05.020 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:05.020 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:05.021 03:02:36 
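On the start side the wait is stricter: waitfornbd first polls /proc/partitions for the new entry, then proves the device is actually serviceable by reading one 4 KiB block with O_DIRECT and checking that a non-zero byte count landed in a scratch file, which is what the dd / stat / rm triplets throughout this stretch of the trace are doing. A sketch of the pattern, with the scratch-file path parameterized (the trace hardcodes one under the SPDK test tree) and the retry delays assumed:

waitfornbd() {
    local nbd_name=$1
    local tmp_file=${2:-/tmp/nbdtest}   # assumption; the trace uses spdk/test/bdev/nbdtest

    # Phase 1: wait for the kernel to publish the partition entry.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed delay
    done

    # Phase 2: a direct-I/O read of one block must succeed end to end.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct 2> /dev/null && break
        sleep 0.1   # assumed delay
    done

    local size
    size=$(stat -c %s "$tmp_file")
    rm -f "$tmp_file"
    [ "$size" != 0 ]   # trace: '[' 4096 '!=' 0 ']' -- the read really returned data
}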
blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:05.021 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:08:05.021 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:05.021 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:05.279 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:05.279 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.279 1+0 records in 00:08:05.279 1+0 records out 00:08:05.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261086 s, 15.7 MB/s 00:08:05.279 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.279 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:05.279 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.279 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:05.279 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:05.279 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.279 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:05.279 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:08:05.279 /dev/nbd14 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.537 1+0 records in 00:08:05.537 1+0 records out 00:08:05.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239153 s, 17.1 MB/s 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:05.537 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:08:05.797 /dev/nbd15 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd15 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd15 /proc/partitions 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.797 1+0 records in 00:08:05.797 1+0 records out 00:08:05.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287607 s, 14.2 MB/s 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:05.797 03:02:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:08:06.056 /dev/nbd2 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:06.056 03:02:37 
blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.056 1+0 records in 00:08:06.056 1+0 records out 00:08:06.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394487 s, 10.4 MB/s 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:06.056 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:08:06.314 /dev/nbd3 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.314 1+0 records in 00:08:06.314 1+0 records out 00:08:06.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359313 s, 11.4 MB/s 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:06.314 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:08:06.573 /dev/nbd4 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.573 1+0 records in 00:08:06.573 1+0 records out 00:08:06.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349375 s, 11.7 MB/s 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:06.573 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:08:06.831 /dev/nbd5 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:06.831 03:02:37 
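Each nbd_start_disk call pairs the bdev list with the device list by index, so the Nth bdev is exported on the Nth node; note the lexicographic device order (nbd1, nbd10, nbd11, ..., nbd2), which is why Malloc2p5 lands on /dev/nbd2. A condensed sketch of the driver loop being traced here, with both lists copied verbatim from the trace:

rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk-nbd.sock

bdev_list=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3
           Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13
          /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5
          /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9)

for ((i = 0; i < ${#nbd_list[@]}; i++)); do
    # Export bdev_list[i] on nbd_list[i], then block until it is readable.
    "$rpc_py" -s "$rpc_sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd "${nbd_list[i]#/dev/}"   # readiness helper sketched above
done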
blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.831 1+0 records in 00:08:06.831 1+0 records out 00:08:06.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400205 s, 10.2 MB/s 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:06.831 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.832 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:06.832 03:02:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:06.832 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.832 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:06.832 03:02:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:08:07.089 /dev/nbd6 00:08:07.089 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:08:07.089 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:08:07.089 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:08:07.089 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:07.089 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:07.089 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:07.089 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:08:07.089 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:07.089 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:07.090 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:07.090 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.090 1+0 records in 00:08:07.090 1+0 records out 00:08:07.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350749 s, 11.7 MB/s 00:08:07.090 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.090 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:07.090 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.090 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:07.090 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:07.090 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.090 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:07.090 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:08:07.348 /dev/nbd7 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd7 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd7 /proc/partitions 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.348 1+0 records in 00:08:07.348 1+0 records out 00:08:07.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401962 s, 10.2 MB/s 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:07.348 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:08:07.606 /dev/nbd8 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd8 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:07.606 03:02:38 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd8 /proc/partitions 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.606 1+0 records in 00:08:07.606 1+0 records out 00:08:07.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380922 s, 10.8 MB/s 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:07.606 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:08:07.864 /dev/nbd9 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd9 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd9 /proc/partitions 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.864 1+0 records in 00:08:07.864 1+0 records out 00:08:07.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381263 s, 10.7 MB/s 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.864 03:02:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.122 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:08.122 { 00:08:08.122 "nbd_device": "/dev/nbd0", 00:08:08.122 "bdev_name": "Malloc0" 00:08:08.122 }, 00:08:08.122 { 00:08:08.122 "nbd_device": "/dev/nbd1", 00:08:08.122 "bdev_name": "Malloc1p0" 00:08:08.122 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd10", 00:08:08.123 "bdev_name": "Malloc1p1" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd11", 00:08:08.123 "bdev_name": "Malloc2p0" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd12", 00:08:08.123 "bdev_name": "Malloc2p1" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd13", 00:08:08.123 "bdev_name": "Malloc2p2" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd14", 00:08:08.123 "bdev_name": "Malloc2p3" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd15", 00:08:08.123 "bdev_name": "Malloc2p4" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd2", 00:08:08.123 "bdev_name": "Malloc2p5" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd3", 00:08:08.123 "bdev_name": "Malloc2p6" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd4", 00:08:08.123 "bdev_name": "Malloc2p7" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd5", 00:08:08.123 "bdev_name": "TestPT" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd6", 00:08:08.123 "bdev_name": "raid0" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd7", 00:08:08.123 "bdev_name": "concat0" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd8", 00:08:08.123 "bdev_name": "raid1" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd9", 00:08:08.123 "bdev_name": "AIO0" 00:08:08.123 } 00:08:08.123 ]' 00:08:08.123 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd0", 00:08:08.123 "bdev_name": "Malloc0" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd1", 00:08:08.123 "bdev_name": "Malloc1p0" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd10", 00:08:08.123 "bdev_name": "Malloc1p1" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd11", 00:08:08.123 "bdev_name": "Malloc2p0" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd12", 00:08:08.123 "bdev_name": "Malloc2p1" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd13", 00:08:08.123 "bdev_name": "Malloc2p2" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd14", 00:08:08.123 "bdev_name": "Malloc2p3" 00:08:08.123 }, 
00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd15", 00:08:08.123 "bdev_name": "Malloc2p4" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd2", 00:08:08.123 "bdev_name": "Malloc2p5" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd3", 00:08:08.123 "bdev_name": "Malloc2p6" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd4", 00:08:08.123 "bdev_name": "Malloc2p7" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd5", 00:08:08.123 "bdev_name": "TestPT" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd6", 00:08:08.123 "bdev_name": "raid0" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd7", 00:08:08.123 "bdev_name": "concat0" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd8", 00:08:08.123 "bdev_name": "raid1" 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "nbd_device": "/dev/nbd9", 00:08:08.123 "bdev_name": "AIO0" 00:08:08.123 } 00:08:08.123 ]' 00:08:08.123 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.381 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:08.381 /dev/nbd1 00:08:08.381 /dev/nbd10 00:08:08.381 /dev/nbd11 00:08:08.381 /dev/nbd12 00:08:08.381 /dev/nbd13 00:08:08.381 /dev/nbd14 00:08:08.381 /dev/nbd15 00:08:08.381 /dev/nbd2 00:08:08.381 /dev/nbd3 00:08:08.381 /dev/nbd4 00:08:08.381 /dev/nbd5 00:08:08.381 /dev/nbd6 00:08:08.381 /dev/nbd7 00:08:08.381 /dev/nbd8 00:08:08.381 /dev/nbd9' 00:08:08.381 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:08.381 /dev/nbd1 00:08:08.381 /dev/nbd10 00:08:08.381 /dev/nbd11 00:08:08.381 /dev/nbd12 00:08:08.381 /dev/nbd13 00:08:08.381 /dev/nbd14 00:08:08.381 /dev/nbd15 00:08:08.381 /dev/nbd2 00:08:08.381 /dev/nbd3 00:08:08.381 /dev/nbd4 00:08:08.381 /dev/nbd5 00:08:08.381 /dev/nbd6 00:08:08.381 /dev/nbd7 00:08:08.381 /dev/nbd8 00:08:08.381 /dev/nbd9' 00:08:08.381 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.381 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:08:08.381 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:08:08.381 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:08.382 256+0 records in 00:08:08.382 256+0 records out 00:08:08.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104893 s, 100 MB/s 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:08.382 256+0 records in 00:08:08.382 256+0 records out 00:08:08.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106488 s, 9.8 MB/s 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:08.382 256+0 records in 00:08:08.382 256+0 records out 00:08:08.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108366 s, 9.7 MB/s 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.382 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:08.640 256+0 records in 00:08:08.640 256+0 records out 00:08:08.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107728 s, 9.7 MB/s 00:08:08.640 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.640 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:08.640 256+0 records in 00:08:08.640 256+0 records out 00:08:08.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108001 s, 9.7 MB/s 00:08:08.640 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.640 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:08.898 256+0 records in 00:08:08.898 256+0 records out 00:08:08.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107722 s, 9.7 MB/s 00:08:08.898 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.898 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:08.898 256+0 records in 00:08:08.898 256+0 records out 00:08:08.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107142 s, 9.8 MB/s 00:08:08.898 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.898 03:02:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:09.156 256+0 records in 00:08:09.156 256+0 records out 00:08:09.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107988 s, 9.7 MB/s 00:08:09.156 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.156 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 
of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:08:09.156 256+0 records in 00:08:09.156 256+0 records out 00:08:09.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107514 s, 9.8 MB/s 00:08:09.156 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.156 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:08:09.414 256+0 records in 00:08:09.414 256+0 records out 00:08:09.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107402 s, 9.8 MB/s 00:08:09.414 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.414 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:08:09.414 256+0 records in 00:08:09.414 256+0 records out 00:08:09.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107737 s, 9.7 MB/s 00:08:09.414 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.414 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:08:09.414 256+0 records in 00:08:09.414 256+0 records out 00:08:09.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107744 s, 9.7 MB/s 00:08:09.414 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.414 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:08:09.672 256+0 records in 00:08:09.672 256+0 records out 00:08:09.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10798 s, 9.7 MB/s 00:08:09.672 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.672 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:08:09.672 256+0 records in 00:08:09.672 256+0 records out 00:08:09.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108365 s, 9.7 MB/s 00:08:09.672 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.672 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:08:09.930 256+0 records in 00:08:09.930 256+0 records out 00:08:09.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10905 s, 9.6 MB/s 00:08:09.930 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.930 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:08:09.930 256+0 records in 00:08:09.930 256+0 records out 00:08:09.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112006 s, 9.4 MB/s 00:08:09.930 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.930 03:02:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:08:10.187 
256+0 records in 00:08:10.187 256+0 records out 00:08:10.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106444 s, 9.9 MB/s 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
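The trace above and below is nbd_common.sh's data round trip: a single 1 MiB random pattern is staged in a temp file, copied onto each exported NBD device with dd (oflag=direct, so the bdev sees the writes rather than the page cache), and then compared back byte-for-byte with cmp before the temp file is removed. A minimal standalone sketch of that pattern; the temp path and the two-device list are illustrative, not the harness's actual sixteen-device values:

# Sketch: write one random 1 MiB pattern to each NBD device, then verify it.
# Assumes each device already has an SPDK bdev attached via nbd_start_disk.
pattern=/tmp/nbdrandtest          # hypothetical temp path
nbd_list=(/dev/nbd0 /dev/nbd1)    # illustrative subset of the 16 devices
dd if=/dev/urandom of="$pattern" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$pattern" "$dev"   # non-zero exit on the first differing byte
done
rm "$pattern"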
00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.187 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:10.445 03:02:41 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.445 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.446 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:10.704 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:10.704 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:10.704 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:10.704 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.704 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.704 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:10.704 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.704 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.704 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.704 03:02:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:10.961 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:10.961 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:10.961 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:10.961 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.961 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.961 03:02:42 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:10.961 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.961 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.961 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.961 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:11.218 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:11.218 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:11.218 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:11.218 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.218 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.218 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:11.218 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.218 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.218 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.218 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:11.476 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:11.476 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:11.476 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:11.476 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.476 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.476 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:11.476 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.476 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.476 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.476 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:11.733 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:11.733 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:11.733 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:11.733 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.733 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.733 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:11.733 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.733 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.733 03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.733 
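Every nbd_stop_disk RPC in this teardown is followed by the same waitfornbd_exit poll: up to 20 checks of /proc/partitions until the kernel has actually released the device. Reconstructed from the counter, grep, and break steps visible in the trace; the sleep interval between probes is an assumption, since every device in this run disappears on the first check:

# Poll (bounded) until an nbd device drops out of /proc/partitions.
waitfornbd_exit() {
    local nbd_name=$1
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1   # still registered; interval assumed, not shown in the log
        else
            break       # gone -- the immediate grep/break pairs above are this branch
        fi
    done
    return 0
}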
03:02:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:08:11.997 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:08:11.997 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:08:11.997 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:08:11.997 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.997 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.997 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:08:11.997 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.997 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.997 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.997 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:12.264 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:12.264 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:12.264 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:12.264 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.264 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.264 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:12.264 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.264 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.264 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.264 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:12.519 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:12.519 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:12.519 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:12.519 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.519 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.519 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:12.519 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.519 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.519 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.519 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:12.774 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:12.774 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:12.774 
03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:12.774 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.774 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.774 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:12.774 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.774 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.774 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.774 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:12.774 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:12.775 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:12.775 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:12.775 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.775 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.775 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:12.775 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.775 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.775 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.775 03:02:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:13.030 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:13.030 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:13.030 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:13.030 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.030 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.030 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:13.030 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.030 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.030 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.030 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:08:13.286 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:08:13.286 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:08:13.287 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:08:13.287 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.287 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.287 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:08:13.287 
03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.287 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.287 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.287 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:08:13.544 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:08:13.544 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:08:13.544 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:08:13.544 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.544 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.544 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:08:13.544 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.544 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.544 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.544 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:08:13.802 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:08:13.802 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:08:13.802 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:08:13.802 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.802 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.802 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:08:13.802 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.802 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.802 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.802 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.802 03:02:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@104 -- # count=0 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:14.059 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:14.316 malloc_lvol_verify 00:08:14.316 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:14.574 e330310a-36fb-4eb2-9590-530ce8046c8a 00:08:14.574 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:14.831 f4b98db8-5aa8-46fb-a24d-a01219322ed0 00:08:14.831 03:02:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:15.088 /dev/nbd0 00:08:15.088 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:15.088 mke2fs 1.46.5 (30-Dec-2021) 00:08:15.088 Discarding device blocks: 0/4096 done 00:08:15.088 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:15.088 00:08:15.088 Allocating group tables: 0/1 done 00:08:15.088 Writing inode tables: 0/1 done 00:08:15.088 Creating journal (1024 blocks): done 00:08:15.088 Writing superblocks and filesystem accounting information: 0/1 done 00:08:15.088 00:08:15.088 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:15.088 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:15.088 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.088 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:15.088 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:15.088 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:15.088 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.088 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:15.345 03:02:46 
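Above, nbd_get_count pipes the now-empty nbd_get_disks listing through jq -r '.[] | .nbd_device' and grep -c /dev/nbd to confirm zero devices remain, and then nbd_with_lvol_verify stacks the whole chain end to end with a real ext4 mkfs as the smoke test: malloc bdev, lvstore, lvol, NBD export, filesystem. Pulled out of the trace into a plain RPC sequence (socket and script paths are this run's; the nbd0 stop and teardown continue below):

# RPC chain from the trace: malloc bdev -> lvstore -> lvol -> NBD -> mkfs.
rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB, 512 B blocks
$rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
$rpc -s $sock bdev_lvol_create lvol 4 -l lvs    # 4 MiB (mkfs reports 4096 1k blocks)
$rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0   # mkfs_ret=0 is the pass condition checked afterwards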
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:15.345 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:15.345 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:15.345 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.345 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.345 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:15.345 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.345 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 4019337 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 4019337 ']' 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 4019337 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4019337 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4019337' 00:08:15.346 killing process with pid 4019337 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@965 -- # kill 4019337 00:08:15.346 03:02:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@970 -- # wait 4019337 00:08:15.911 03:02:46 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:15.911 00:08:15.911 real 0m22.527s 00:08:15.911 user 0m31.377s 00:08:15.911 sys 0m9.976s 00:08:15.911 03:02:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.911 03:02:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:15.911 ************************************ 00:08:15.911 END TEST bdev_nbd 00:08:15.911 ************************************ 00:08:15.911 03:02:46 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:15.911 03:02:46 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:08:15.911 03:02:46 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:08:15.911 03:02:46 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:08:15.911 03:02:46 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:15.911 03:02:46 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.911 03:02:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:15.911 ************************************ 00:08:15.911 START TEST bdev_fio 00:08:15.911 ************************************ 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:08:15.911 03:02:46 
blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:08:15.911 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 
blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:08:15.911 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.912 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:08:15.912 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:08:15.912 03:02:46 
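This long echo run is blockdev.sh generating the fio job file: after fio_config_gen lays down the verify-mode template, the loop appends one [job_<bdev>] section per bdev under test. A condensed sketch; the append redirections are inferred (xtrace does not print them), and the array here is a subset of the sixteen names in the log:

# Sketch: one fio job section per bdev, appended to the generated config.
fio_config=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
bdevs_name=(Malloc0 Malloc1p0 Malloc2p0 TestPT raid0 concat0 raid1 AIO0)  # subset
for b in "${bdevs_name[@]}"; do
    echo "[job_$b]"    >> "$fio_config"
    echo "filename=$b" >> "$fio_config"
done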
blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.912 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:08:15.912 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:08:15.912 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:15.912 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:08:15.912 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:08:15.912 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:08:15.912 03:02:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:15.912 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:15.912 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.912 03:02:46 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:15.912 ************************************ 00:08:15.912 START TEST bdev_fio_rw_verify 00:08:15.912 ************************************ 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:08:15.912 03:02:47 
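Before launching fio, the fio_plugin wrapper probes the spdk_bdev ioengine with ldd for a linked ASan runtime; if one is found it must be preloaded ahead of the plugin, otherwise fio's dlopen of the ioengine trips the sanitizer. Both probes come back empty in this run (asan_lib=), so LD_PRELOAD ends up carrying only the plugin. A sketch of the detection, with the fio flags trimmed to the main ones shown in the trace:

# Sketch: preload an ASan runtime (if the plugin links one) before the ioengine.
plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # third ldd column is the resolved library path; empty when not linked
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json \
    /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
# (--verify_state_save=0, --spdk_mem=0 and --aux-path omitted for brevity)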
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:15.912 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:08:16.199 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:08:16.199 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:08:16.199 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:16.199 03:02:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:16.461 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 
job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:16.461 fio-3.35 00:08:16.461 Starting 16 threads 00:08:28.653 00:08:28.654 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=4024048: Wed May 15 03:02:58 2024 00:08:28.654 read: IOPS=84.2k, BW=329MiB/s (345MB/s)(3289MiB/10001msec) 00:08:28.654 slat (usec): min=3, max=3382, avg=37.32, stdev=14.87 00:08:28.654 clat (usec): min=12, max=3745, avg=310.28, stdev=134.76 00:08:28.654 lat (usec): min=26, max=3789, avg=347.59, stdev=142.41 00:08:28.654 clat percentiles (usec): 00:08:28.654 | 50.000th=[ 302], 99.000th=[ 611], 99.900th=[ 709], 99.990th=[ 914], 00:08:28.654 | 99.999th=[ 1401] 00:08:28.654 write: IOPS=132k, BW=514MiB/s (539MB/s)(5074MiB/9866msec); 0 zone resets 00:08:28.654 slat (usec): min=5, max=1275, avg=52.08, stdev=14.74 00:08:28.654 clat (usec): min=7, max=1985, avg=373.69, stdev=160.63 00:08:28.654 lat (usec): min=39, max=2117, avg=425.77, stdev=167.55 00:08:28.654 clat percentiles (usec): 00:08:28.654 | 50.000th=[ 359], 99.000th=[ 750], 99.900th=[ 865], 99.990th=[ 1090], 00:08:28.654 | 99.999th=[ 1418] 00:08:28.654 bw ( KiB/s): min=439895, max=709737, per=98.68%, avg=519652.68, stdev=3951.71, samples=304 00:08:28.654 iops : min=109973, max=177433, avg=129911.95, stdev=987.89, samples=304 00:08:28.654 lat (usec) : 10=0.01%, 20=0.01%, 50=0.25%, 100=2.72%, 250=26.99% 00:08:28.654 lat (usec) : 500=52.35%, 750=17.09%, 1000=0.58% 00:08:28.654 lat (msec) : 2=0.01%, 4=0.01% 00:08:28.654 cpu : usr=99.35%, sys=0.26%, ctx=656, majf=0, minf=3334 00:08:28.654 IO depths : 1=12.5%, 2=24.9%, 4=50.2%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:28.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.654 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.654 issued rwts: total=842065,1298852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.654 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:28.654 00:08:28.654 Run status group 0 (all jobs): 00:08:28.654 READ: bw=329MiB/s (345MB/s), 329MiB/s-329MiB/s (345MB/s-345MB/s), io=3289MiB (3449MB), run=10001-10001msec 00:08:28.654 WRITE: bw=514MiB/s (539MB/s), 514MiB/s-514MiB/s (539MB/s-539MB/s), io=5074MiB (5320MB), run=9866-9866msec 00:08:28.654 00:08:28.654 real 0m11.424s 00:08:28.654 user 2m48.921s 00:08:28.654 sys 0m1.067s 00:08:28.654 03:02:58 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.654 03:02:58 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:08:28.654 ************************************ 00:08:28.654 END TEST bdev_fio_rw_verify 00:08:28.654 ************************************ 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:08:28.654 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:08:28.655 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "fe296213-ac09-4dda-878d-7d0da7594df6"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fe296213-ac09-4dda-878d-7d0da7594df6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ad2a4194-f3c7-53ff-beaa-91f0a38ec32c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ad2a4194-f3c7-53ff-beaa-91f0a38ec32c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": 
"Malloc1p1",' ' "aliases": [' ' "64179349-8322-5d16-968b-13a44648a0d4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "64179349-8322-5d16-968b-13a44648a0d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "efebdf2e-65e8-5874-97d9-909e36218d84"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "efebdf2e-65e8-5874-97d9-909e36218d84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "5da2f539-228e-5d92-8e90-1a355b1b43c1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5da2f539-228e-5d92-8e90-1a355b1b43c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c0947e49-7a55-5bc4-b948-2ea286d3331d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c0947e49-7a55-5bc4-b948-2ea286d3331d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6a196d69-8511-5650-bae6-2fc65bc8c9bf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6a196d69-8511-5650-bae6-2fc65bc8c9bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": 
true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "9984f92a-6a87-5219-be45-38dd39a872fa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9984f92a-6a87-5219-be45-38dd39a872fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "214a7d34-7c85-55e9-aecc-bfd5545c72a8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "214a7d34-7c85-55e9-aecc-bfd5545c72a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "ef75d48e-63c1-51dd-b030-8f39bc3bbcba"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ef75d48e-63c1-51dd-b030-8f39bc3bbcba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "06468a1a-c195-5754-a085-699faee6dbd0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06468a1a-c195-5754-a085-699faee6dbd0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4fbf1ef8-0af3-50a4-ab1d-c6f4999757bb"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4fbf1ef8-0af3-50a4-ab1d-c6f4999757bb",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "a2e887e9-8417-4e39-99c6-4f987fa24dfd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a2e887e9-8417-4e39-99c6-4f987fa24dfd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a2e887e9-8417-4e39-99c6-4f987fa24dfd",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "25fe0c3f-6d12-45d4-a378-83f3a350ff20",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "a86c94dc-dde4-41f8-a659-14d2e20d1fd2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "714f15ed-7339-46c5-8083-8faf03204253"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "714f15ed-7339-46c5-8083-8faf03204253",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "714f15ed-7339-46c5-8083-8faf03204253",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' 
"base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "04205cd5-9fc3-4408-aca0-12c32501d66a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e4043c96-d4f4-4c1b-ada1-827f0768a1f2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "82adfe73-aaf0-4534-af3e-b85e4a47872c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "82adfe73-aaf0-4534-af3e-b85e4a47872c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "82adfe73-aaf0-4534-af3e-b85e4a47872c",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "f5be6822-0340-4898-a8cd-a6068c030427",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7f9eb1aa-c601-4117-a2fd-03a07cfd7afb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "d2c3249f-4c3d-40d3-b80b-ec8afe33000b"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "d2c3249f-4c3d-40d3-b80b-ec8afe33000b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:28.655 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:28.655 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:08:28.655 Malloc1p0 00:08:28.655 Malloc1p1 00:08:28.655 Malloc2p0 00:08:28.655 Malloc2p1 00:08:28.655 Malloc2p2 00:08:28.655 Malloc2p3 00:08:28.655 Malloc2p4 00:08:28.655 Malloc2p5 00:08:28.655 Malloc2p6 00:08:28.655 Malloc2p7 00:08:28.655 TestPT 00:08:28.655 raid0 00:08:28.655 concat0 ]] 00:08:28.655 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf 
'%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "fe296213-ac09-4dda-878d-7d0da7594df6"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fe296213-ac09-4dda-878d-7d0da7594df6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ad2a4194-f3c7-53ff-beaa-91f0a38ec32c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ad2a4194-f3c7-53ff-beaa-91f0a38ec32c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "64179349-8322-5d16-968b-13a44648a0d4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "64179349-8322-5d16-968b-13a44648a0d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "efebdf2e-65e8-5874-97d9-909e36218d84"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "efebdf2e-65e8-5874-97d9-909e36218d84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "5da2f539-228e-5d92-8e90-1a355b1b43c1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5da2f539-228e-5d92-8e90-1a355b1b43c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c0947e49-7a55-5bc4-b948-2ea286d3331d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c0947e49-7a55-5bc4-b948-2ea286d3331d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6a196d69-8511-5650-bae6-2fc65bc8c9bf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6a196d69-8511-5650-bae6-2fc65bc8c9bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "9984f92a-6a87-5219-be45-38dd39a872fa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9984f92a-6a87-5219-be45-38dd39a872fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "214a7d34-7c85-55e9-aecc-bfd5545c72a8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "214a7d34-7c85-55e9-aecc-bfd5545c72a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "ef75d48e-63c1-51dd-b030-8f39bc3bbcba"' ' ],' ' "product_name": "Split Disk",' 
' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ef75d48e-63c1-51dd-b030-8f39bc3bbcba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "06468a1a-c195-5754-a085-699faee6dbd0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06468a1a-c195-5754-a085-699faee6dbd0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4fbf1ef8-0af3-50a4-ab1d-c6f4999757bb"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4fbf1ef8-0af3-50a4-ab1d-c6f4999757bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "a2e887e9-8417-4e39-99c6-4f987fa24dfd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a2e887e9-8417-4e39-99c6-4f987fa24dfd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a2e887e9-8417-4e39-99c6-4f987fa24dfd",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 
2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "25fe0c3f-6d12-45d4-a378-83f3a350ff20",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "a86c94dc-dde4-41f8-a659-14d2e20d1fd2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "714f15ed-7339-46c5-8083-8faf03204253"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "714f15ed-7339-46c5-8083-8faf03204253",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "714f15ed-7339-46c5-8083-8faf03204253",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "04205cd5-9fc3-4408-aca0-12c32501d66a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e4043c96-d4f4-4c1b-ada1-827f0768a1f2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "82adfe73-aaf0-4534-af3e-b85e4a47872c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "82adfe73-aaf0-4534-af3e-b85e4a47872c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "82adfe73-aaf0-4534-af3e-b85e4a47872c",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "f5be6822-0340-4898-a8cd-a6068c030427",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7f9eb1aa-c601-4117-a2fd-03a07cfd7afb",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "d2c3249f-4c3d-40d3-b80b-ec8afe33000b"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "d2c3249f-4c3d-40d3-b80b-ec8afe33000b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]'
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]'
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]'
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]'
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]'
00:08:28.656 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT
00:08:28.657 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:08:28.657 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]'
00:08:28.657 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0
00:08:28.657 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:08:28.657 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]'
00:08:28.657 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0
00:08:28.657 03:02:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:08:28.657 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']'
00:08:28.657 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:28.657 03:02:58 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:08:28.657 ************************************
00:08:28.657 START TEST bdev_fio_trim
00:08:28.657 ************************************
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # local sanitizers
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # shift
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local asan_lib=
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}"
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libasan
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}'
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib=
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]]
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}"
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}'
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib=
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]]
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev'
00:08:28.657 03:02:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:08:28.657 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:28.657 fio-3.35
00:08:28.657 Starting 14 threads
00:08:40.847
00:08:40.847 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=4026066: Wed May 15 03:03:09 2024
00:08:40.847 write: IOPS=116k, BW=454MiB/s (476MB/s)(4542MiB/10001msec); 0 zone resets
00:08:40.847 slat (usec): min=2, max=3536, avg=41.35, stdev=12.15
00:08:40.847 clat (usec): min=27, max=3953, avg=306.27, stdev=102.47
00:08:40.847 lat (usec): min=34, max=3995, avg=347.61, stdev=107.07
00:08:40.847 clat percentiles (usec):
00:08:40.847 | 50.000th=[ 297], 99.000th=[ 529], 99.900th=[ 578], 99.990th=[ 627],
00:08:40.847 | 99.999th=[ 832]
00:08:40.847 bw ( KiB/s): min=415149, max=669683, per=100.00%, avg=466088.74, stdev=4099.71, samples=266
00:08:40.847 iops : min=103786, max=167420, avg=116520.42, stdev=1024.93, samples=266
00:08:40.847 trim: IOPS=116k, BW=454MiB/s (476MB/s)(4542MiB/10001msec); 0 zone resets
00:08:40.847 slat (usec): min=4, max=414, avg=28.30, stdev= 7.73
00:08:40.847 clat (usec): min=8, max=3995, avg=343.23, stdev=112.00
00:08:40.847 lat (usec): min=24, max=4012, avg=371.52, stdev=115.38
00:08:40.847 clat percentiles (usec):
00:08:40.847 | 50.000th=[ 338], 99.000th=[ 578], 99.900th=[ 635], 99.990th=[ 685],
00:08:40.847 | 99.999th=[ 783]
00:08:40.847 bw ( KiB/s): min=415149, max=669683, per=100.00%, avg=466089.16, stdev=4099.71, samples=266
00:08:40.847 iops : min=103786, max=167420, avg=116520.53, stdev=1024.93, samples=266
00:08:40.847 lat (usec) : 10=0.01%, 20=0.01%, 50=0.03%, 100=0.62%, 250=27.08%
00:08:40.847 lat (usec) : 500=66.15%, 750=6.12%, 1000=0.01%
00:08:40.847 lat (msec) : 2=0.01%, 4=0.01%
00:08:40.847 cpu : usr=99.58%, sys=0.00%, ctx=534, majf=0, minf=951
00:08:40.847 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:40.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:40.847 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:40.847 issued rwts: total=0,1162683,1162686,0 short=0,0,0,0 dropped=0,0,0,0
00:08:40.847 latency : target=0, window=0, percentile=100.00%, depth=8
00:08:40.847
00:08:40.847 Run status group 0 (all jobs):
00:08:40.847 WRITE: bw=454MiB/s (476MB/s), 454MiB/s-454MiB/s (476MB/s-476MB/s), io=4542MiB (4762MB), run=10001-10001msec
00:08:40.848 TRIM: bw=454MiB/s (476MB/s), 454MiB/s-454MiB/s (476MB/s-476MB/s), io=4542MiB (4762MB), run=10001-10001msec
00:08:40.848
00:08:40.848 real 0m11.671s
00:08:40.848 user 2m29.719s
00:08:40.848 sys 0m0.903s
00:08:40.848 03:03:10 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:40.848 03:03:10 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:08:40.848 ************************************
00:08:40.848 END TEST bdev_fio_trim
00:08:40.848 ************************************
00:08:40.848 03:03:10 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f
00:08:40.848 03:03:10 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:08:40.848 03:03:10 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd
00:08:40.848 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:08:40.848 03:03:10 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT
00:08:40.848
00:08:40.848 real 0m23.449s
00:08:40.848 user 5m18.851s
00:08:40.848 sys 0m2.128s
00:08:40.848 03:03:10 blockdev_general.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:40.848 03:03:10 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:08:40.848 ************************************
00:08:40.848 END TEST bdev_fio
00:08:40.848 ************************************
00:08:40.848 03:03:10 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:40.848 03:03:10 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:40.848 03:03:10 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']'
00:08:40.848 03:03:10 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:40.848 03:03:10 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:08:40.848 ************************************
00:08:40.848 START TEST bdev_verify
00:08:40.848 ************************************
00:08:40.848 03:03:10 blockdev_general.bdev_verify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:40.848 [2024-05-15 03:03:10.477899] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
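Worth pausing on how the trim pass above was actually launched: fio_bdev is not a separate binary but fio with the SPDK bdev plugin LD_PRELOADed, as the @1348 xtrace shows. Stripped of the sanitizer-library probing, the invocation reduces to roughly this sketch (paths and flags are the ones from the log):

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    # Preload the SPDK fio plugin so fio can resolve ioengine=spdk_bdev,
    # then point fio at the generated job file and the bdev JSON config.
    LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=$SPDK/test/bdev/bdev.json \
        $SPDK/test/bdev/bdev.fio

With ioengine=spdk_bdev, each job's filename= names a bdev from the JSON config rather than a block device node, which is how the job_raid0 and job_concat0 sections attach to RAID volumes that never appear under /dev.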
00:08:40.848 [2024-05-15 03:03:10.477953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028422 ]
00:08:40.848 [2024-05-15 03:03:10.577552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:40.848 [2024-05-15 03:03:10.671981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:40.848 [2024-05-15 03:03:10.671987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:40.848 [2024-05-15 03:03:10.822924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:40.848 [2024-05-15 03:03:10.822976] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:08:40.848 [2024-05-15 03:03:10.822988] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:08:40.848 [2024-05-15 03:03:10.830929] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:40.848 [2024-05-15 03:03:10.830954] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:40.848 [2024-05-15 03:03:10.838948] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:40.848 [2024-05-15 03:03:10.838973] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:40.848 [2024-05-15 03:03:10.911511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:40.848 [2024-05-15 03:03:10.911562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:40.848 [2024-05-15 03:03:10.911576] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x202dfc0
00:08:40.848 [2024-05-15 03:03:10.911586] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:40.848 [2024-05-15 03:03:10.913214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:40.848 [2024-05-15 03:03:10.913241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:08:40.848 Running I/O for 5 seconds...
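The vbdev_passthru notices above show TestPT being wired up lazily: the first registration attempt cannot find Malloc3 yet and is deferred, then it fires once the base bdev arrives, claims it, and registers the pt_bdev. Outside this JSON-driven run, the same vbdev could be created against a live target with the bdev_passthru_create RPC; the socket path below is an assumption, the bdev names are the ones from the log:

    # One-liner equivalent of the passthru setup traced above
    # (hypothetical target socket).
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc3 -p TestPT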
00:08:46.108
00:08:46.108 Latency(us)
00:08:46.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:46.108 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.108 Verification LBA range: start 0x0 length 0x1000
00:08:46.108 Malloc0 : 5.16 868.46 3.39 0.00 0.00 147013.03 503.22 301590.43
00:08:46.108 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.108 Verification LBA range: start 0x1000 length 0x1000
00:08:46.109 Malloc0 : 5.16 843.76 3.30 0.00 0.00 151315.24 690.47 491332.75
00:08:46.109 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x800
00:08:46.109 Malloc1p0 : 5.16 446.40 1.74 0.00 0.00 284789.81 3791.73 287609.42
00:08:46.109 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x800 length 0x800
00:08:46.109 Malloc1p0 : 5.16 446.46 1.74 0.00 0.00 284736.06 3791.73 267636.54
00:08:46.109 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x800
00:08:46.109 Malloc1p1 : 5.16 446.16 1.74 0.00 0.00 283937.15 3776.12 283614.84
00:08:46.109 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x800 length 0x800
00:08:46.109 Malloc1p1 : 5.16 446.22 1.74 0.00 0.00 283887.48 3760.52 265639.25
00:08:46.109 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x200
00:08:46.109 Malloc2p0 : 5.17 445.93 1.74 0.00 0.00 283101.76 3713.71 283614.84
00:08:46.109 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x200 length 0x200
00:08:46.109 Malloc2p0 : 5.17 445.99 1.74 0.00 0.00 283059.64 3713.71 263641.97
00:08:46.109 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x200
00:08:46.109 Malloc2p1 : 5.17 445.70 1.74 0.00 0.00 282264.30 3807.33 279620.27
00:08:46.109 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x200 length 0x200
00:08:46.109 Malloc2p1 : 5.17 445.77 1.74 0.00 0.00 282233.89 3791.73 261644.68
00:08:46.109 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x200
00:08:46.109 Malloc2p2 : 5.17 445.47 1.74 0.00 0.00 281428.91 3744.91 275625.69
00:08:46.109 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x200 length 0x200
00:08:46.109 Malloc2p2 : 5.17 445.54 1.74 0.00 0.00 281390.57 3729.31 259647.39
00:08:46.109 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x200
00:08:46.109 Malloc2p3 : 5.34 455.11 1.78 0.00 0.00 274753.44 3791.73 271631.12
00:08:46.109 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x200 length 0x200
00:08:46.109 Malloc2p3 : 5.34 455.17 1.78 0.00 0.00 274741.35 3744.91 255652.82
00:08:46.109 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x200
00:08:46.109 Malloc2p4 : 5.35 454.86 1.78 0.00 0.00 273949.25 3900.95 265639.25
00:08:46.109 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x200 length 0x200
00:08:46.109 Malloc2p4 : 5.35 454.92 1.78 0.00 0.00 273939.72 3838.54 250659.60
00:08:46.109 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x200
00:08:46.109 Malloc2p5 : 5.35 454.61 1.78 0.00 0.00 273207.60 3932.16 263641.97
00:08:46.109 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x200 length 0x200
00:08:46.109 Malloc2p5 : 5.35 454.67 1.78 0.00 0.00 273203.08 3900.95 243669.09
00:08:46.109 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x200
00:08:46.109 Malloc2p6 : 5.35 454.37 1.77 0.00 0.00 272393.31 3744.91 255652.82
00:08:46.109 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x200 length 0x200
00:08:46.109 Malloc2p6 : 5.35 454.43 1.78 0.00 0.00 272393.26 3744.91 238675.87
00:08:46.109 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x200
00:08:46.109 Malloc2p7 : 5.36 454.13 1.77 0.00 0.00 271606.54 3713.71 250659.60
00:08:46.109 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x200 length 0x200
00:08:46.109 Malloc2p7 : 5.35 454.19 1.77 0.00 0.00 271580.38 3682.50 236678.58
00:08:46.109 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x1000
00:08:46.109 TestPT : 5.36 431.92 1.69 0.00 0.00 282855.12 26339.23 249660.95
00:08:46.109 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x1000 length 0x1000
00:08:46.109 TestPT : 5.36 433.14 1.69 0.00 0.00 282234.60 25090.93 236678.58
00:08:46.109 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x2000
00:08:46.109 raid0 : 5.36 453.70 1.77 0.00 0.00 269394.62 3947.76 220700.28
00:08:46.109 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x2000 length 0x2000
00:08:46.109 raid0 : 5.36 453.75 1.77 0.00 0.00 269471.37 3947.76 207717.91
00:08:46.109 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x2000
00:08:46.109 concat0 : 5.36 453.46 1.77 0.00 0.00 268611.10 3776.12 213709.78
00:08:46.109 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x2000 length 0x2000
00:08:46.109 concat0 : 5.36 453.51 1.77 0.00 0.00 268673.02 3760.52 211712.49
00:08:46.109 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x1000
00:08:46.109 raid1 : 5.37 453.20 1.77 0.00 0.00 267868.18 4275.44 215707.06
00:08:46.109 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x1000 length 0x1000
00:08:46.109 raid1 : 5.37 453.26 1.77 0.00 0.00 267915.57 4400.27 215707.06
00:08:46.109 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x0 length 0x4e2
00:08:46.109 AIO0 : 5.37 453.02 1.77 0.00 0.00 267015.93 1638.40 228689.43
00:08:46.109 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:46.109 Verification LBA range: start 0x4e2 length 0x4e2
00:08:46.109 AIO0 : 5.37 453.08 1.77 0.00 0.00 267091.18 1630.60 228689.43
00:08:46.109 ===================================================================================================================
00:08:46.109 Total : 15210.37 59.42 0.00 0.00 261721.38 503.22 491332.75
00:08:46.109
00:08:46.109 real 0m6.532s
00:08:46.109 user 0m12.232s
00:08:46.109 sys 0m0.328s
00:08:46.109 03:03:16 blockdev_general.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:46.109 03:03:16 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:46.109 ************************************
00:08:46.109 END TEST bdev_verify
00:08:46.109 ************************************
00:08:46.109 03:03:16 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:46.109 03:03:16 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']'
00:08:46.109 03:03:16 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:46.109 03:03:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:08:46.109 ************************************
00:08:46.109 START TEST bdev_verify_big_io
00:08:46.109 ************************************
00:08:46.109 03:03:17 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:46.109 [2024-05-15 03:03:17.082531] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
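A quick cross-check of the verify totals in the table above: 15210.37 IOPS at the 4096-byte IO size (-o 4096) should reproduce the reported 59.42 MiB/s, and it does:

    # 15210.37 IOPS * 4096 bytes per IO, converted to MiB/s
    echo '15210.37 * 4096 / 1048576' | bc -l    # ~59.42, matching the MiB/s column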
00:08:46.109 [2024-05-15 03:03:17.082531] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:08:46.109 [2024-05-15 03:03:17.082588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029574 ]
00:08:46.109 [2024-05-15 03:03:17.178880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:46.419 [2024-05-15 03:03:17.270842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:46.419 [2024-05-15 03:03:17.270854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:46.419 [2024-05-15 03:03:17.414608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:46.419 [2024-05-15 03:03:17.414658] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:08:46.419 [2024-05-15 03:03:17.414669] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:08:46.419 [2024-05-15 03:03:17.422621] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:46.419 [2024-05-15 03:03:17.422644] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:46.419 [2024-05-15 03:03:17.430632] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:46.419 [2024-05-15 03:03:17.430656] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:46.419 [2024-05-15 03:03:17.503003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:46.419 [2024-05-15 03:03:17.503050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:46.419 [2024-05-15 03:03:17.503064] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xce0fc0
00:08:46.419 [2024-05-15 03:03:17.503074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:46.419 [2024-05-15 03:03:17.504718] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:46.419 [2024-05-15 03:03:17.504744] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:08:46.678 [2024-05-15 03:03:17.669628] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.670754] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.672382] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.673467] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.675107] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.676191] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.677823] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.679459] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.680370] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.681645] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.682472] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.683756] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.684584] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.685892] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.686699] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.688002] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:08:46.678 [2024-05-15 03:03:17.709163] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:08:46.678 [2024-05-15 03:03:17.710926] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
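The 32- and 78-request caps reported above are consistent with the size of each target: in verify mode every outstanding I/O must cover a distinct LBA range, and the per-job tables below report a verification range of 0x20 (= 32) 64 KiB units for each Malloc2p* slice (32 x 64 KiB = 2 MiB) and 0x4e (= 78) units for AIO0, so at most 32 (respectively 78) I/Os can be in flight against those bdevs at once and the requested -q 128 is clamped accordingly.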
00:08:46.678 Running I/O for 5 seconds...
00:08:54.779
00:08:54.779 Latency(us)
00:08:54.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:54.779 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x100
00:08:54.779 Malloc0 : 5.66 158.43 9.90 0.00 0.00 790619.33 983.04 2029244.22
00:08:54.779 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x100 length 0x100
00:08:54.779 Malloc0 : 6.96 202.39 12.65 0.00 0.00 462157.06 1006.45 639132.04
00:08:54.779 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x80
00:08:54.779 Malloc1p0 : 6.89 32.50 2.03 0.00 0.00 3475810.52 1599.39 5592405.33
00:08:54.779 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x80 length 0x80
00:08:54.779 Malloc1p0 : 6.52 74.86 4.68 0.00 0.00 1558644.95 2590.23 3019898.88
00:08:54.779 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x80
00:08:54.779 Malloc1p1 : 6.95 34.53 2.16 0.00 0.00 3219748.46 1614.99 5400665.72
00:08:54.779 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x80 length 0x80
00:08:54.779 Malloc1p1 : 6.97 32.14 2.01 0.00 0.00 3619123.57 1614.99 5880014.75
00:08:54.779 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x20
00:08:54.779 Malloc2p0 : 6.29 22.89 1.43 0.00 0.00 1218792.01 663.16 2236962.13
00:08:54.779 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x20 length 0x20
00:08:54.779 Malloc2p0 : 6.52 22.08 1.38 0.00 0.00 1322799.83 682.67 2268918.74
00:08:54.779 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x20
00:08:54.779 Malloc2p1 : 6.29 22.89 1.43 0.00 0.00 1207089.67 659.26 2220983.83
00:08:54.779 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x20 length 0x20
00:08:54.779 Malloc2p1 : 6.52 22.08 1.38 0.00 0.00 1312355.16 670.96 2236962.13
00:08:54.779 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x20
00:08:54.779 Malloc2p2 : 6.44 24.85 1.55 0.00 0.00 1117592.95 651.46 2189027.23
00:08:54.779 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x20 length 0x20
00:08:54.779 Malloc2p2 : 6.52 22.07 1.38 0.00 0.00 1300675.85 690.47 2205005.53
00:08:54.779 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x20
00:08:54.779 Malloc2p3 : 6.44 24.85 1.55 0.00 0.00 1106766.86 655.36 2157070.63
00:08:54.779 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x20 length 0x20
00:08:54.779 Malloc2p3 : 6.53 22.07 1.38 0.00 0.00 1289085.49 686.57 2189027.23
00:08:54.779 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x20
00:08:54.779 Malloc2p4 : 6.44 24.84 1.55 0.00 0.00 1095904.27 647.56 2125114.03
00:08:54.779 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x20 length 0x20
00:08:54.779 Malloc2p4 : 6.53 22.06 1.38 0.00 0.00 1278564.03 670.96 2157070.63
00:08:54.779 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x20
00:08:54.779 Malloc2p5 : 6.44 24.84 1.55 0.00 0.00 1085362.94 651.46 2093157.42
00:08:54.779 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x20 length 0x20
00:08:54.779 Malloc2p5 : 6.53 22.06 1.38 0.00 0.00 1267882.38 674.86 2141092.33
00:08:54.779 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x20
00:08:54.779 Malloc2p6 : 6.44 24.83 1.55 0.00 0.00 1074771.40 655.36 2077179.12
00:08:54.779 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x20 length 0x20
00:08:54.779 Malloc2p6 : 6.53 22.05 1.38 0.00 0.00 1256945.68 667.06 2109135.73
00:08:54.779 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x20
00:08:54.779 Malloc2p7 : 6.44 24.83 1.55 0.00 0.00 1065055.54 655.36 2045222.52
00:08:54.779 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x20 length 0x20
00:08:54.779 Malloc2p7 : 6.53 22.05 1.38 0.00 0.00 1245538.73 678.77 2093157.42
00:08:54.779 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x100
00:08:54.779 TestPT : 6.98 36.69 2.29 0.00 0.00 2758782.70 1607.19 4921316.69
00:08:54.779 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x100 length 0x100
00:08:54.779 TestPT : 7.00 32.29 2.02 0.00 0.00 3329677.31 197731.47 4282184.66
00:08:54.779 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x200
00:08:54.779 raid0 : 6.90 39.45 2.47 0.00 0.00 2481074.31 1716.42 4761533.68
00:08:54.779 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x200 length 0x200
00:08:54.779 raid0 : 6.97 36.75 2.30 0.00 0.00 2844548.22 1693.01 5081099.70
00:08:54.779 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x0 length 0x200
00:08:54.779 concat0 : 6.98 43.53 2.72 0.00 0.00 2230690.06 1669.61 4569794.07
00:08:54.779 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:54.779 Verification LBA range: start 0x200 length 0x200
00:08:54.779 concat0 : 7.00 36.59 2.29 0.00 0.00 2780298.50 1708.62 4889360.09
00:08:54.780 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:54.780 Verification LBA range: start 0x0 length 0x100
00:08:54.780 raid1 : 6.96 58.22 3.64 0.00 0.00 1614780.34 2122.12 4346097.86
00:08:54.780 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:54.780 Verification LBA range: start 0x100 length 0x100
00:08:54.780 raid1 : 7.00 38.85 2.43 0.00 0.00 2547220.71 2168.93 4697620.48
00:08:54.780 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:08:54.780 Verification LBA range: start 0x0 length 0x4e
00:08:54.780 AIO0 : 6.98 63.60 3.97 0.00 0.00 875230.11 862.11 3003920.58
00:08:54.780 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:08:54.780 Verification LBA range: start 0x4e length 0x4e
00:08:54.780 AIO0 : 6.97 34.16 2.13 0.00 0.00 1723751.44 850.41 3195660.19
00:08:54.780 ===================================================================================================================
00:08:54.780 Total : 1326.32 82.90 0.00 0.00 1561488.54 647.56 5880014.75
00:08:54.780
00:08:54.780 real 0m8.198s
00:08:54.780 user 0m15.523s
00:08:54.780 sys 0m0.349s
00:08:54.780 03:03:25 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:54.780 03:03:25 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:54.780 ************************************
00:08:54.780 END TEST bdev_verify_big_io
00:08:54.780 ************************************
00:08:54.780 03:03:25 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:54.780 03:03:25 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:08:54.780 03:03:25 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:54.780 03:03:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:08:54.780 ************************************
00:08:54.780 START TEST bdev_write_zeroes
00:08:54.780 ************************************
00:08:54.780 03:03:25 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
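The recurring START TEST/END TEST banners and real/user/sys triplets throughout this log come from the harness's run_test wrapper in autotest_common.sh. A rough sketch of its shape, inferred from this trace alone and omitting the wrapper's extra bookkeeping:

  run_test() {
          local test_name=$1
          shift
          echo "************************************"
          echo "START TEST $test_name"
          echo "************************************"
          time "$@"    # produces the real/user/sys lines seen above
          echo "************************************"
          echo "END TEST $test_name"
          echo "************************************"
  }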
00:08:54.780 [2024-05-15 03:03:25.346633] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:08:54.780 [2024-05-15 03:03:25.346667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030941 ]
00:08:54.780 [2024-05-15 03:03:25.430466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:54.780 [2024-05-15 03:03:25.520833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:54.780 [2024-05-15 03:03:25.666717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:54.780 [2024-05-15 03:03:25.666771] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:08:54.780 [2024-05-15 03:03:25.666784] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:08:54.780 [2024-05-15 03:03:25.674720] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:54.780 [2024-05-15 03:03:25.674746] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:54.780 [2024-05-15 03:03:25.682737] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:54.780 [2024-05-15 03:03:25.682760] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:54.780 [2024-05-15 03:03:25.755052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:54.780 [2024-05-15 03:03:25.755104] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:54.780 [2024-05-15 03:03:25.755117] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28fea10
00:08:54.780 [2024-05-15 03:03:25.755127] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:54.780 [2024-05-15 03:03:25.756615] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:54.780 [2024-05-15 03:03:25.756643] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:08:55.037 Running I/O for 1 seconds...
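As a quick consistency check on the one-second summary that follows: the Total line reports 73373.43 IOPS at 286.61 MiB/s, and 73373.43 x 4096 bytes ~= 286.6 MiB/s, which confirms the 4 KiB I/O size (-o 4096) in effect for this job.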
00:08:55.969
00:08:55.969 Latency(us)
00:08:55.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:55.969 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 Malloc0 : 1.05 4639.42 18.12 0.00 0.00 27571.74 686.57 45687.95
00:08:55.969 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 Malloc1p0 : 1.05 4632.21 18.09 0.00 0.00 27567.28 959.63 44938.97
00:08:55.969 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 Malloc1p1 : 1.05 4625.12 18.07 0.00 0.00 27542.02 951.83 43940.33
00:08:55.969 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 Malloc2p0 : 1.05 4617.98 18.04 0.00 0.00 27515.65 947.93 42941.68
00:08:55.969 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 Malloc2p1 : 1.05 4610.95 18.01 0.00 0.00 27492.06 951.83 41943.04
00:08:55.969 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 Malloc2p2 : 1.06 4603.90 17.98 0.00 0.00 27468.86 963.54 40944.40
00:08:55.969 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 Malloc2p3 : 1.06 4596.82 17.96 0.00 0.00 27445.71 951.83 39945.75
00:08:55.969 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 Malloc2p4 : 1.06 4589.83 17.93 0.00 0.00 27428.94 951.83 39196.77
00:08:55.969 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 Malloc2p5 : 1.06 4582.86 17.90 0.00 0.00 27405.44 951.83 38198.13
00:08:55.969 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 Malloc2p6 : 1.06 4575.86 17.87 0.00 0.00 27382.73 947.93 37199.48
00:08:55.969 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 Malloc2p7 : 1.06 4568.94 17.85 0.00 0.00 27362.02 951.83 36200.84
00:08:55.969 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 TestPT : 1.07 4562.00 17.82 0.00 0.00 27339.44 998.64 35202.19
00:08:55.969 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 raid0 : 1.07 4554.04 17.79 0.00 0.00 27308.36 1747.63 33454.57
00:08:55.969 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 concat0 : 1.07 4546.28 17.76 0.00 0.00 27253.35 1724.22 31706.94
00:08:55.969 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 raid1 : 1.07 4536.59 17.72 0.00 0.00 27182.36 2746.27 28960.67
00:08:55.969 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:55.969 AIO0 : 1.07 4530.64 17.70 0.00 0.00 27074.95 1006.45 28211.69
00:08:55.969 ===================================================================================================================
00:08:55.969 Total : 73373.43 286.61 0.00 0.00 27396.31 686.57 45687.95
00:08:56.534
00:08:56.534 real 0m2.102s
00:08:56.534 user 0m1.782s
00:08:56.534 sys 0m0.275s
00:08:56.534 03:03:27 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:56.534 03:03:27 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:56.534 ************************************
00:08:56.534 END TEST bdev_write_zeroes
00:08:56.534 ************************************
00:08:56.534 03:03:27 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:56.534 03:03:27 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:08:56.534 03:03:27 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:56.534 03:03:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:08:56.534 ************************************
00:08:56.534 START TEST bdev_json_nonenclosed
00:08:56.534 ************************************
00:08:56.534 03:03:27 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:56.535 [2024-05-15 03:03:27.537715] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:08:56.535 [2024-05-15 03:03:27.537764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031206 ]
00:08:56.535 [2024-05-15 03:03:27.634692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:56.793 [2024-05-15 03:03:27.726576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:56.793 [2024-05-15 03:03:27.726638] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:08:56.793 [2024-05-15 03:03:27.726654] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:08:56.793 [2024-05-15 03:03:27.726662] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:56.793
00:08:56.793 real 0m0.355s
00:08:56.793 user 0m0.242s
00:08:56.793 sys 0m0.111s
00:08:56.793 03:03:27 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:56.793 03:03:27 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:08:56.793 ************************************
00:08:56.793 END TEST bdev_json_nonenclosed
00:08:56.793 ************************************
00:08:56.793 03:03:27 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:56.793 03:03:27 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:08:56.793 03:03:27 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:56.793 03:03:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:08:56.793 ************************************
00:08:56.793 START TEST bdev_json_nonarray
00:08:56.793 ************************************
00:08:56.793 03:03:27 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:57.050 [2024-05-15 03:03:27.974474] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:08:57.050 [2024-05-15 03:03:27.974527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031348 ]
00:08:57.050 [2024-05-15 03:03:28.073564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.050 [2024-05-15 03:03:28.165103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:57.050 [2024-05-15 03:03:28.165175] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:08:57.050 [2024-05-15 03:03:28.165192] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:08:57.050 [2024-05-15 03:03:28.165201] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:57.308
00:08:57.308 real 0m0.357s
00:08:57.308 user 0m0.232s
00:08:57.308 sys 0m0.123s
00:08:57.308 03:03:28 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:57.308 03:03:28 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:08:57.308 ************************************
00:08:57.308 END TEST bdev_json_nonarray
00:08:57.308 ************************************
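Both JSON negative tests feed bdevperf a deliberately malformed configuration and expect it to abort through spdk_app_stop. A minimal sketch of the shape the config loader does accept (the 'bdev' subsystem entry here is illustrative, not taken from this log):

  {
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
  }

nonenclosed.json omits the outer braces and nonarray.json makes "subsystems" something other than an array, which is what triggers the two json_config_prepare_ctx errors above.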
00:08:57.308 03:03:28 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]]
00:08:57.308 03:03:28 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite ''
00:08:57.308 03:03:28 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:08:57.308 03:03:28 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:57.308 03:03:28 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:08:57.308 ************************************
00:08:57.308 START TEST bdev_qos
00:08:57.308 ************************************
00:08:57.308 03:03:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=4031467
00:08:57.308 03:03:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 4031467'
00:08:57.308 Process qos testing pid: 4031467
00:08:57.308 03:03:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:08:57.308 03:03:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:08:57.308 03:03:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 4031467
00:08:57.308 03:03:28 blockdev_general.bdev_qos -- common/autotest_common.sh@827 -- # '[' -z 4031467 ']'
00:08:57.308 03:03:28 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:57.308 03:03:28 blockdev_general.bdev_qos -- common/autotest_common.sh@832 -- # local max_retries=100
00:08:57.308 03:03:28 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:57.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:57.308 03:03:28 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # xtrace_disable
00:08:57.308 03:03:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:08:57.308 [2024-05-15 03:03:28.410130] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:08:57.308 [2024-05-15 03:03:28.410185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031467 ]
00:08:57.565 [2024-05-15 03:03:28.501553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.565 [2024-05-15 03:03:28.595768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # return 0
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:08:58.497 Malloc_0
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_0
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout=
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]]
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:08:58.497 [
00:08:58.497 {
00:08:58.497 "name": "Malloc_0",
00:08:58.497 "aliases": [
00:08:58.497 "12d00bcd-396a-4b72-9dd8-0974657c7f7a"
00:08:58.497 ],
00:08:58.497 "product_name": "Malloc disk",
00:08:58.497 "block_size": 512,
00:08:58.497 "num_blocks": 262144,
00:08:58.497 "uuid": "12d00bcd-396a-4b72-9dd8-0974657c7f7a",
00:08:58.497 "assigned_rate_limits": {
00:08:58.497 "rw_ios_per_sec": 0,
00:08:58.497 "rw_mbytes_per_sec": 0,
00:08:58.497 "r_mbytes_per_sec": 0,
00:08:58.497 "w_mbytes_per_sec": 0
00:08:58.497 },
00:08:58.497 "claimed": false,
00:08:58.497 "zoned": false,
00:08:58.497 "supported_io_types": {
00:08:58.497 "read": true,
00:08:58.497 "write": true,
00:08:58.497 "unmap": true,
00:08:58.497 "write_zeroes": true,
00:08:58.497 "flush": true,
00:08:58.497 "reset": true,
00:08:58.497 "compare": false,
00:08:58.497 "compare_and_write": false,
00:08:58.497 "abort": true,
00:08:58.497 "nvme_admin": false,
00:08:58.497 "nvme_io": false
00:08:58.497 },
00:08:58.497 "memory_domains": [
00:08:58.497 {
00:08:58.497 "dma_device_id": "system",
00:08:58.497 "dma_device_type": 1
00:08:58.497 },
00:08:58.497 {
00:08:58.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:58.497 "dma_device_type": 2
00:08:58.497 }
00:08:58.497 ],
00:08:58.497 "driver_specific": {}
00:08:58.497 }
00:08:58.497 ]
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:08:58.497 Null_1
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Null_1
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout=
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]]
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:08:58.497 [
00:08:58.497 {
00:08:58.497 "name": "Null_1",
00:08:58.497 "aliases": [
00:08:58.497 "326c0799-e751-46c7-a0f3-9c0ca3cbd325"
00:08:58.497 ],
00:08:58.497 "product_name": "Null disk",
00:08:58.497 "block_size": 512,
00:08:58.497 "num_blocks": 262144,
00:08:58.497 "uuid": "326c0799-e751-46c7-a0f3-9c0ca3cbd325",
00:08:58.497 "assigned_rate_limits": {
00:08:58.497 "rw_ios_per_sec": 0,
00:08:58.497 "rw_mbytes_per_sec": 0,
00:08:58.497 "r_mbytes_per_sec": 0,
00:08:58.497 "w_mbytes_per_sec": 0
00:08:58.497 },
00:08:58.497 "claimed": false,
00:08:58.497 "zoned": false,
00:08:58.497 "supported_io_types": {
00:08:58.497 "read": true,
00:08:58.497 "write": true,
00:08:58.497 "unmap": false,
00:08:58.497 "write_zeroes": true,
00:08:58.497 "flush": false,
00:08:58.497 "reset": true,
00:08:58.497 "compare": false,
00:08:58.497 "compare_and_write": false,
00:08:58.497 "abort": true,
00:08:58.497 "nvme_admin": false,
00:08:58.497 "nvme_io": false
00:08:58.497 },
00:08:58.497 "driver_specific": {}
00:08:58.497 }
00:08:58.497 ]
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:08:58.497 03:03:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1
00:08:58.497 Running I/O for 60 seconds...
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 55722.08 222888.32 0.00 0.00 224256.00 0.00 0.00 '
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']'
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}'
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=55722.08
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 55722
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=55722
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=13000
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 13000 -gt 1000 ']'
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 13000 Malloc_0
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 13000 IOPS Malloc_0
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']'
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:03.760 03:03:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:03.760 ************************************
00:09:03.760 START TEST bdev_qos_iops
00:09:03.760 ************************************
00:09:03.760 03:03:34 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1121 -- # run_qos_test 13000 IOPS Malloc_0
03:03:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=13000
03:03:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0
03:03:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0
03:03:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS
03:03:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
03:03:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result
03:03:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
03:03:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0
03:03:34 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1
00:09:09.026 03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 13000.81 52003.23 0.00 0.00 52936.00 0.00 0.00 '
03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']'
03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}'
03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=13000.81
03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 13000
03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=13000
03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']'
03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=11700
03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=14300
03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 13000 -lt 11700 ']'
03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 13000 -gt 14300 ']'
00:09:09.026
00:09:09.026 real 0m5.257s
00:09:09.026 user 0m0.126s
00:09:09.026 sys 0m0.035s
00:09:09.026 03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:09.026 03:03:39 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x
00:09:09.026 ************************************
00:09:09.026 END TEST bdev_qos_iops
00:09:09.026 ************************************
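The pass criterion visible in the trace above is a plus-or-minus 10% window around the configured limit: for the 13000 IOPS cap, lower_limit = 13000 * 9 / 10 = 11700 and upper_limit = 13000 * 11 / 10 = 14300 (bash integer arithmetic, so results truncate), and the measured 13000.81 IOPS lands inside it. The same window recurs in the bandwidth tests below (7168 KB/s -> 6451/7884, 2048 KB/s -> 1843/2252).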
00:09:09.026 03:03:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1
00:09:09.026 03:03:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:09:09.026 03:03:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1
00:09:09.026 03:03:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result
00:09:09.026 03:03:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:09:09.026 03:03:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1
00:09:09.026 03:03:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1
00:09:14.291 03:03:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 19107.99 76431.97 0.00 0.00 77824.00 0.00 0.00 '
03:03:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
03:03:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
03:03:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}'
03:03:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=77824.00
03:03:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 77824
03:03:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=77824
03:03:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=7
03:03:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 7 -lt 2 ']'
03:03:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 7 Null_1
03:03:45 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
03:03:45 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:14.291 03:03:45 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
03:03:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 7 BANDWIDTH Null_1
03:03:45 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']'
03:03:45 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable
03:03:45 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:14.291 ************************************
00:09:14.291 START TEST bdev_qos_bw
00:09:14.291 ************************************
00:09:14.291 03:03:45 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1121 -- # run_qos_test 7 BANDWIDTH Null_1
03:03:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=7
03:03:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0
03:03:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1
03:03:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
03:03:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1
03:03:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result
03:03:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
03:03:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1
03:03:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1
00:09:19.599 03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 1792.05 7168.20 0.00 0.00 7412.00 0.00 0.00 '
03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}'
03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=7412.00
03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 7412
03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=7412
03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']'
03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=7168
03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=6451
03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=7884
03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 7412 -lt 6451 ']'
03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 7412 -gt 7884 ']'
00:09:19.599
00:09:19.599 real 0m5.288s
00:09:19.599 user 0m0.124s
00:09:19.599 sys 0m0.038s
00:09:19.599 03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:19.599 03:03:50 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x
00:09:19.599 ************************************
00:09:19.599 END TEST bdev_qos_bw
00:09:19.599 ************************************
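On where the 7 MB/s cap came from: the unthrottled Null_1 baseline measured 77824 KB/s, and the trace shows bw_limit reduced from 77824 to 7, consistent with taking roughly a tenth of the measured rate in whole MB/s (77824 / 1024 = 76 MiB/s, and 76 / 10 truncates to 7 in shell arithmetic); the '[' 7 -lt 2 ']' test is the guard that would raise the derived value to the 2 MB/s floor (qos_lower_bw_limit) if it came out too small.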
00:09:19.599 03:03:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
03:03:50 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
03:03:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:19.599 03:03:50 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
03:03:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
03:03:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']'
03:03:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable
03:03:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:19.599 ************************************
00:09:19.599 START TEST bdev_qos_ro_bw
00:09:19.599 ************************************
00:09:19.599 03:03:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1121 -- # run_qos_test 2 BANDWIDTH Malloc_0
03:03:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2
03:03:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0
03:03:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0
03:03:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
03:03:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
03:03:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result
03:03:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:09:19.599 03:03:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0
03:03:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1
00:09:24.863 03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.68 2046.73 0.00 0.00 2052.00 0.00 0.00 '
03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}'
03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2052.00
03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2052
03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2052
03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']'
03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048
03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843
03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252
03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2052 -lt 1843 ']'
03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2052 -gt 2252 ']'
00:09:24.863
00:09:24.863 real 0m5.166s
00:09:24.863 user 0m0.117s
00:09:24.863 sys 0m0.032s
00:09:24.863 03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:24.863 03:03:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x
00:09:24.863 ************************************
00:09:24.863 END TEST bdev_qos_ro_bw
00:09:24.863 ************************************
00:09:24.863 03:03:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0
03:03:55 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
03:03:55 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:25.430 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:25.430 03:03:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1
00:09:25.430 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:25.430 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:25.689
00:09:25.689 Latency(us)
00:09:25.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:25.689 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:09:25.689 Malloc_0 : 26.81 18385.03 71.82 0.00 0.00 13791.13 2231.34 503316.48
00:09:25.689 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:09:25.689 Null_1 : 26.98 18705.24 73.07 0.00 0.00 13642.79 924.53 163777.58
00:09:25.689 ===================================================================================================================
00:09:25.689 Total : 37090.27 144.88 0.00 0.00 13716.09 924.53 503316.48
00:09:25.689 0
00:09:25.689 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:25.689 03:03:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 4031467
00:09:25.689 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@946 -- # '[' -z 4031467 ']'
00:09:25.689 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # kill -0 4031467
00:09:25.689 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # uname
00:09:25.689 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:09:25.689 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4031467
00:09:25.689 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:09:25.689 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:09:25.689 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4031467'
00:09:25.689 killing process with pid 4031467
00:09:25.689 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@965 -- # kill 4031467
00:09:25.689 Received shutdown signal, test time was about 27.033213 seconds
00:09:25.689
00:09:25.689 Latency(us)
00:09:25.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:25.948 ===================================================================================================================
00:09:25.948 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:25.948 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@970 -- # wait 4031467
00:09:25.948 03:03:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT
00:09:25.948
00:09:25.948 real 0m28.537s
00:09:25.948 user 0m29.410s
00:09:25.948 sys 0m0.695s
00:09:25.948 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:25.948 03:03:56 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:25.948 ************************************
00:09:25.948 END TEST bdev_qos
00:09:25.948 ************************************
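The shutdown sequence above is the harness's killprocess helper; a rough sketch of its behavior, inferred from this trace alone:

  killprocess() {
          kill -0 "$1"                                    # fails fast if the pid is already gone
          process_name=$(ps --no-headers -o comm= "$1")   # 'reactor_1' here, so not running under sudo
          echo "killing process with pid $1"
          kill "$1"                                       # bdevperf then logs the shutdown signal
          wait "$1"
  }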
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:09:25.948 03:03:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 4036172 00:09:25.948 03:03:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@827 -- # '[' -z 4036172 ']' 00:09:25.948 03:03:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.948 03:03:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:25.948 03:03:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.948 03:03:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:25.948 03:03:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:25.948 [2024-05-15 03:03:57.022501] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:09:25.948 [2024-05-15 03:03:57.022555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036172 ] 00:09:26.208 [2024-05-15 03:03:57.120394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:26.208 [2024-05-15 03:03:57.219212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.208 [2024-05-15 03:03:57.219218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.775 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:26.775 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # return 0 00:09:26.775 03:03:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:09:26.775 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.775 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:27.034 Malloc_QD 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_QD 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local i 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- 
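Note: the setup traced above reduces to three steps: start bdevperf in wait-for-RPC mode (-z), create the target bdev over the RPC socket, and kick off the timed run with the helper script. A condensed sketch under the assumption that $SPDK_DIR points at a built SPDK tree; the polling loop is a stand-in for the waitforlisten helper, and the flags are copied from the trace (the trailing '' is an empty optional argument the test passes through):

    # Hypothetical standalone reproduction of the bdevperf setup above.
    SPDK_DIR=/path/to/spdk    # assumption: local checkout with examples built
    "$SPDK_DIR/build/examples/bdevperf" -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' &
    bdevperf_pid=$!
    # wait for the app's RPC socket to come up before issuing commands
    until "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
    "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create -b Malloc_QD 128 512   # 128 MiB, 512 B blocks
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" perform_tests          # triggers the timed run
    kill "$bdevperf_pid"   # the suites above tear the app down explicitly afterwards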
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:27.034 [ 00:09:27.034 { 00:09:27.034 "name": "Malloc_QD", 00:09:27.034 "aliases": [ 00:09:27.034 "d8b759db-1758-4a58-a30c-6e21ef27455c" 00:09:27.034 ], 00:09:27.034 "product_name": "Malloc disk", 00:09:27.034 "block_size": 512, 00:09:27.034 "num_blocks": 262144, 00:09:27.034 "uuid": "d8b759db-1758-4a58-a30c-6e21ef27455c", 00:09:27.034 "assigned_rate_limits": { 00:09:27.034 "rw_ios_per_sec": 0, 00:09:27.034 "rw_mbytes_per_sec": 0, 00:09:27.034 "r_mbytes_per_sec": 0, 00:09:27.034 "w_mbytes_per_sec": 0 00:09:27.034 }, 00:09:27.034 "claimed": false, 00:09:27.034 "zoned": false, 00:09:27.034 "supported_io_types": { 00:09:27.034 "read": true, 00:09:27.034 "write": true, 00:09:27.034 "unmap": true, 00:09:27.034 "write_zeroes": true, 00:09:27.034 "flush": true, 00:09:27.034 "reset": true, 00:09:27.034 "compare": false, 00:09:27.034 "compare_and_write": false, 00:09:27.034 "abort": true, 00:09:27.034 "nvme_admin": false, 00:09:27.034 "nvme_io": false 00:09:27.034 }, 00:09:27.034 "memory_domains": [ 00:09:27.034 { 00:09:27.034 "dma_device_id": "system", 00:09:27.034 "dma_device_type": 1 00:09:27.034 }, 00:09:27.034 { 00:09:27.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.034 "dma_device_type": 2 00:09:27.034 } 00:09:27.034 ], 00:09:27.034 "driver_specific": {} 00:09:27.034 } 00:09:27.034 ] 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # return 0 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:09:27.034 03:03:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:27.034 Running I/O for 5 seconds... 
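Note: the JSON dump above is the descriptor that waitforbdev retrieves, and the sampling check that follows in the trace is a simple round-trip through the same RPC surface. Roughly, assuming rpc.py is already pointed at the app's socket:

    # Verify the bdev exists, then enable and confirm queue-depth sampling.
    rpc.py bdev_get_bdevs -b Malloc_QD -t 2000 >/dev/null    # -t waits up to 2000 ms
    rpc.py bdev_set_qd_sampling_period Malloc_QD 10          # period value from this run
    period=$(rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period')
    [ "$period" -eq 10 ] || { echo 'sampling period did not apply'; exit 1; }

Once sampling is on, bdev_get_iostat also reports queue_depth, io_time and weighted_io_time, which is where the queue_depth and weighted_io_time figures in the trace below come from.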
00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.938 03:03:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:09:28.938 "tick_rate": 2100000000, 00:09:28.938 "ticks": 17490144849418642, 00:09:28.938 "bdevs": [ 00:09:28.938 { 00:09:28.938 "name": "Malloc_QD", 00:09:28.938 "bytes_read": 721465856, 00:09:28.938 "num_read_ops": 176132, 00:09:28.938 "bytes_written": 0, 00:09:28.938 "num_write_ops": 0, 00:09:28.938 "bytes_unmapped": 0, 00:09:28.938 "num_unmap_ops": 0, 00:09:28.938 "bytes_copied": 0, 00:09:28.938 "num_copy_ops": 0, 00:09:28.938 "read_latency_ticks": 2042382488200, 00:09:28.938 "max_read_latency_ticks": 13574286, 00:09:28.938 "min_read_latency_ticks": 247122, 00:09:28.938 "write_latency_ticks": 0, 00:09:28.938 "max_write_latency_ticks": 0, 00:09:28.938 "min_write_latency_ticks": 0, 00:09:28.938 "unmap_latency_ticks": 0, 00:09:28.938 "max_unmap_latency_ticks": 0, 00:09:28.938 "min_unmap_latency_ticks": 0, 00:09:28.938 "copy_latency_ticks": 0, 00:09:28.938 "max_copy_latency_ticks": 0, 00:09:28.938 "min_copy_latency_ticks": 0, 00:09:28.938 "io_error": {}, 00:09:28.938 "queue_depth_polling_period": 10, 00:09:28.938 "queue_depth": 512, 00:09:28.938 "io_time": 20, 00:09:28.938 "weighted_io_time": 10240 00:09:28.938 } 00:09:28.938 ] 00:09:28.938 }' 00:09:28.938 03:04:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:09:28.938 03:04:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:09:28.938 03:04:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:09:28.938 03:04:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:09:28.938 03:04:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:09:28.938 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.938 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:28.938 00:09:28.938 Latency(us) 00:09:28.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.938 Job: Malloc_QD (Core Mask 0x1, workload: randread, 
depth: 256, IO size: 4096) 00:09:28.938 Malloc_QD : 1.98 45697.29 178.51 0.00 0.00 5587.03 1661.81 7427.41 00:09:28.938 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:28.938 Malloc_QD : 1.98 46798.25 182.81 0.00 0.00 5455.46 1614.99 6491.18 00:09:28.938 =================================================================================================================== 00:09:28.938 Total : 92495.54 361.31 0.00 0.00 5520.41 1614.99 7427.41 00:09:28.938 0 00:09:28.938 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.938 03:04:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 4036172 00:09:28.938 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@946 -- # '[' -z 4036172 ']' 00:09:28.938 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # kill -0 4036172 00:09:28.938 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # uname 00:09:29.196 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:29.196 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4036172 00:09:29.196 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:29.196 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:29.196 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4036172' 00:09:29.196 killing process with pid 4036172 00:09:29.196 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@965 -- # kill 4036172 00:09:29.196 Received shutdown signal, test time was about 2.052743 seconds 00:09:29.196 00:09:29.196 Latency(us) 00:09:29.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.196 =================================================================================================================== 00:09:29.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:29.197 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@970 -- # wait 4036172 00:09:29.455 03:04:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:09:29.455 00:09:29.455 real 0m3.397s 00:09:29.455 user 0m6.712s 00:09:29.455 sys 0m0.337s 00:09:29.455 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:29.455 03:04:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:29.455 ************************************ 00:09:29.455 END TEST bdev_qd_sampling 00:09:29.455 ************************************ 00:09:29.455 03:04:00 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:09:29.455 03:04:00 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:29.455 03:04:00 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:29.455 03:04:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:29.455 ************************************ 00:09:29.455 START TEST bdev_error 00:09:29.455 ************************************ 00:09:29.455 03:04:00 blockdev_general.bdev_error -- common/autotest_common.sh@1121 -- # error_test_suite '' 00:09:29.455 03:04:00 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 
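Note: each suite here ends with the same killprocess teardown seen just above: probe the pid, make sure it is the expected reactor process rather than a sudo wrapper, then signal and reap it. A simplified sketch of such a helper (the real one in autotest_common.sh handles more cases, e.g. signalling a sudo wrapper's child instead):

    # Simplified killprocess: refuse to signal anything unexpected.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1     # is it still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1             # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                 # reap so the exit code is observed
    }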
00:09:29.455 03:04:00 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:09:29.455 03:04:00 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:09:29.455 03:04:00 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=4036819 00:09:29.455 03:04:00 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:09:29.455 03:04:00 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 4036819' 00:09:29.455 Process error testing pid: 4036819 00:09:29.455 03:04:00 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 4036819 00:09:29.455 03:04:00 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 4036819 ']' 00:09:29.455 03:04:00 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.455 03:04:00 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:29.455 03:04:00 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.455 03:04:00 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:29.455 03:04:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:29.455 [2024-05-15 03:04:00.485356] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:09:29.455 [2024-05-15 03:04:00.485409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036819 ] 00:09:29.455 [2024-05-15 03:04:00.578810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.714 [2024-05-15 03:04:00.670111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:09:30.281 03:04:01 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 Dev_1 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.281 03:04:01 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 
00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 [ 00:09:30.281 { 00:09:30.281 "name": "Dev_1", 00:09:30.281 "aliases": [ 00:09:30.281 "b767bd91-7a7c-44a6-8ed0-c79d23251fd4" 00:09:30.281 ], 00:09:30.281 "product_name": "Malloc disk", 00:09:30.281 "block_size": 512, 00:09:30.281 "num_blocks": 262144, 00:09:30.281 "uuid": "b767bd91-7a7c-44a6-8ed0-c79d23251fd4", 00:09:30.281 "assigned_rate_limits": { 00:09:30.281 "rw_ios_per_sec": 0, 00:09:30.281 "rw_mbytes_per_sec": 0, 00:09:30.281 "r_mbytes_per_sec": 0, 00:09:30.281 "w_mbytes_per_sec": 0 00:09:30.281 }, 00:09:30.281 "claimed": false, 00:09:30.281 "zoned": false, 00:09:30.281 "supported_io_types": { 00:09:30.281 "read": true, 00:09:30.281 "write": true, 00:09:30.281 "unmap": true, 00:09:30.281 "write_zeroes": true, 00:09:30.281 "flush": true, 00:09:30.281 "reset": true, 00:09:30.281 "compare": false, 00:09:30.281 "compare_and_write": false, 00:09:30.281 "abort": true, 00:09:30.281 "nvme_admin": false, 00:09:30.281 "nvme_io": false 00:09:30.281 }, 00:09:30.281 "memory_domains": [ 00:09:30.281 { 00:09:30.281 "dma_device_id": "system", 00:09:30.281 "dma_device_type": 1 00:09:30.281 }, 00:09:30.281 { 00:09:30.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.281 "dma_device_type": 2 00:09:30.281 } 00:09:30.281 ], 00:09:30.281 "driver_specific": {} 00:09:30.281 } 00:09:30.281 ] 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:09:30.281 03:04:01 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 true 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.281 03:04:01 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.281 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:30.539 Dev_2 00:09:30.539 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.539 03:04:01 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:30.540 03:04:01 
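Note: the error suite builds its stack in a few RPC calls, most visible above and the injection step a few lines below: a malloc base device, an error bdev layered on it (bdev_error_create names the new device EE_<base>, hence EE_Dev_1), and a second clean malloc target. Condensed, with names as in the trace:

    # Error-injection stack used by this suite.
    rpc.py bdev_malloc_create -b Dev_1 128 512    # base device, 128 MiB / 512 B blocks
    rpc.py bdev_error_create Dev_1                # exposes the error bdev EE_Dev_1
    rpc.py bdev_malloc_create -b Dev_2 128 512    # control device with no faults
    # queue 5 artificial failures on EE_Dev_1, covering all I/O types
    rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5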
blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:30.540 [ 00:09:30.540 { 00:09:30.540 "name": "Dev_2", 00:09:30.540 "aliases": [ 00:09:30.540 "c4eeca4a-871d-467c-a533-184a3446baf8" 00:09:30.540 ], 00:09:30.540 "product_name": "Malloc disk", 00:09:30.540 "block_size": 512, 00:09:30.540 "num_blocks": 262144, 00:09:30.540 "uuid": "c4eeca4a-871d-467c-a533-184a3446baf8", 00:09:30.540 "assigned_rate_limits": { 00:09:30.540 "rw_ios_per_sec": 0, 00:09:30.540 "rw_mbytes_per_sec": 0, 00:09:30.540 "r_mbytes_per_sec": 0, 00:09:30.540 "w_mbytes_per_sec": 0 00:09:30.540 }, 00:09:30.540 "claimed": false, 00:09:30.540 "zoned": false, 00:09:30.540 "supported_io_types": { 00:09:30.540 "read": true, 00:09:30.540 "write": true, 00:09:30.540 "unmap": true, 00:09:30.540 "write_zeroes": true, 00:09:30.540 "flush": true, 00:09:30.540 "reset": true, 00:09:30.540 "compare": false, 00:09:30.540 "compare_and_write": false, 00:09:30.540 "abort": true, 00:09:30.540 "nvme_admin": false, 00:09:30.540 "nvme_io": false 00:09:30.540 }, 00:09:30.540 "memory_domains": [ 00:09:30.540 { 00:09:30.540 "dma_device_id": "system", 00:09:30.540 "dma_device_type": 1 00:09:30.540 }, 00:09:30.540 { 00:09:30.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.540 "dma_device_type": 2 00:09:30.540 } 00:09:30.540 ], 00:09:30.540 "driver_specific": {} 00:09:30.540 } 00:09:30.540 ] 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:09:30.540 03:04:01 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:30.540 03:04:01 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.540 03:04:01 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:09:30.540 03:04:01 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:30.540 Running I/O for 5 seconds... 00:09:31.477 03:04:02 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 4036819 00:09:31.477 03:04:02 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 4036819' 00:09:31.477 Process is existed as continue on error is set. 
Pid: 4036819 00:09:31.477 03:04:02 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:09:31.477 03:04:02 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.477 03:04:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:31.477 03:04:02 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.477 03:04:02 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:09:31.477 03:04:02 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.477 03:04:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:31.477 03:04:02 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.477 03:04:02 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:09:31.477 Timeout while waiting for response: 00:09:31.477 00:09:31.477 00:09:35.677 00:09:35.677 Latency(us) 00:09:35.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.677 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:35.677 EE_Dev_1 : 0.89 33910.04 132.46 5.63 0.00 467.55 142.38 760.69 00:09:35.677 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:35.677 Dev_2 : 5.00 73954.51 288.88 0.00 0.00 212.17 71.19 19848.05 00:09:35.677 =================================================================================================================== 00:09:35.677 Total : 107864.54 421.35 5.63 0.00 231.40 71.19 19848.05 00:09:36.612 03:04:07 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 4036819 00:09:36.612 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@946 -- # '[' -z 4036819 ']' 00:09:36.612 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # kill -0 4036819 00:09:36.612 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # uname 00:09:36.612 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:36.612 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4036819 00:09:36.612 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:36.612 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:36.612 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4036819' 00:09:36.612 killing process with pid 4036819 00:09:36.612 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@965 -- # kill 4036819 00:09:36.612 Received shutdown signal, test time was about 5.000000 seconds 00:09:36.612 00:09:36.612 Latency(us) 00:09:36.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.612 =================================================================================================================== 00:09:36.612 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:36.612 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@970 -- # wait 4036819 00:09:36.871 03:04:07 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=4037959 00:09:36.871 03:04:07 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 4037959' 00:09:36.871 Process error testing pid: 4037959 00:09:36.871 03:04:07 
blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:09:36.871 03:04:07 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 4037959 00:09:36.871 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 4037959 ']' 00:09:36.871 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.871 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:36.871 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.871 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:36.871 03:04:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:36.871 [2024-05-15 03:04:07.892846] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:09:36.871 [2024-05-15 03:04:07.892908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037959 ] 00:09:36.871 [2024-05-15 03:04:07.984871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.157 [2024-05-15 03:04:08.080153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.723 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:37.723 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:09:37.723 03:04:08 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:37.723 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.723 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:37.982 Dev_1 00:09:37.982 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.982 03:04:08 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:09:37.982 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:09:37.982 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:37.982 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:37.983 03:04:08 blockdev_general.bdev_error -- 
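Note the launch just above: unlike the first error run, this bdevperf instance is started without the -f flag, which in these suites is what keeps the app alive past injected I/O failures (the first run even logs 'continue on error is set'). Side by side, the only difference between the two invocations is:

    # First run: survives the 5 injected failures and finishes the 5 s workload
    bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
    # Second run: no -f, so the injected failures are expected to abort it
    bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 ''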
common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:37.983 [ 00:09:37.983 { 00:09:37.983 "name": "Dev_1", 00:09:37.983 "aliases": [ 00:09:37.983 "a6e0b9ff-aa0a-456a-a764-f3339a0748ef" 00:09:37.983 ], 00:09:37.983 "product_name": "Malloc disk", 00:09:37.983 "block_size": 512, 00:09:37.983 "num_blocks": 262144, 00:09:37.983 "uuid": "a6e0b9ff-aa0a-456a-a764-f3339a0748ef", 00:09:37.983 "assigned_rate_limits": { 00:09:37.983 "rw_ios_per_sec": 0, 00:09:37.983 "rw_mbytes_per_sec": 0, 00:09:37.983 "r_mbytes_per_sec": 0, 00:09:37.983 "w_mbytes_per_sec": 0 00:09:37.983 }, 00:09:37.983 "claimed": false, 00:09:37.983 "zoned": false, 00:09:37.983 "supported_io_types": { 00:09:37.983 "read": true, 00:09:37.983 "write": true, 00:09:37.983 "unmap": true, 00:09:37.983 "write_zeroes": true, 00:09:37.983 "flush": true, 00:09:37.983 "reset": true, 00:09:37.983 "compare": false, 00:09:37.983 "compare_and_write": false, 00:09:37.983 "abort": true, 00:09:37.983 "nvme_admin": false, 00:09:37.983 "nvme_io": false 00:09:37.983 }, 00:09:37.983 "memory_domains": [ 00:09:37.983 { 00:09:37.983 "dma_device_id": "system", 00:09:37.983 "dma_device_type": 1 00:09:37.983 }, 00:09:37.983 { 00:09:37.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.983 "dma_device_type": 2 00:09:37.983 } 00:09:37.983 ], 00:09:37.983 "driver_specific": {} 00:09:37.983 } 00:09:37.983 ] 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:09:37.983 03:04:08 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:37.983 true 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.983 03:04:08 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:37.983 Dev_2 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.983 03:04:08 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
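Note: the waitforbdev sequence repeating through these traces (local bdev_name, bdev_timeout=2000, bdev_wait_for_examine, completed by the bdev_get_bdevs -t call just below) is essentially a bounded poll for a bdev to register. A minimal equivalent sketch:

    # Sketch of a waitforbdev-style helper: block until the bdev registers.
    waitforbdev() {
        local name=$1 timeout_ms=${2:-2000}
        rpc.py bdev_wait_for_examine                        # let examine callbacks finish
        rpc.py bdev_get_bdevs -b "$name" -t "$timeout_ms" >/dev/null
    }
    waitforbdev Dev_2   # as used above, with the default 2000 ms budget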
00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:37.983 [ 00:09:37.983 { 00:09:37.983 "name": "Dev_2", 00:09:37.983 "aliases": [ 00:09:37.983 "de44247c-3809-4c8e-8508-252b767898fd" 00:09:37.983 ], 00:09:37.983 "product_name": "Malloc disk", 00:09:37.983 "block_size": 512, 00:09:37.983 "num_blocks": 262144, 00:09:37.983 "uuid": "de44247c-3809-4c8e-8508-252b767898fd", 00:09:37.983 "assigned_rate_limits": { 00:09:37.983 "rw_ios_per_sec": 0, 00:09:37.983 "rw_mbytes_per_sec": 0, 00:09:37.983 "r_mbytes_per_sec": 0, 00:09:37.983 "w_mbytes_per_sec": 0 00:09:37.983 }, 00:09:37.983 "claimed": false, 00:09:37.983 "zoned": false, 00:09:37.983 "supported_io_types": { 00:09:37.983 "read": true, 00:09:37.983 "write": true, 00:09:37.983 "unmap": true, 00:09:37.983 "write_zeroes": true, 00:09:37.983 "flush": true, 00:09:37.983 "reset": true, 00:09:37.983 "compare": false, 00:09:37.983 "compare_and_write": false, 00:09:37.983 "abort": true, 00:09:37.983 "nvme_admin": false, 00:09:37.983 "nvme_io": false 00:09:37.983 }, 00:09:37.983 "memory_domains": [ 00:09:37.983 { 00:09:37.983 "dma_device_id": "system", 00:09:37.983 "dma_device_type": 1 00:09:37.983 }, 00:09:37.983 { 00:09:37.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.983 "dma_device_type": 2 00:09:37.983 } 00:09:37.983 ], 00:09:37.983 "driver_specific": {} 00:09:37.983 } 00:09:37.983 ] 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:09:37.983 03:04:08 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.983 03:04:08 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 4037959 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:09:37.983 03:04:08 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 4037959 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.983 03:04:08 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 4037959 00:09:37.983 Running I/O for 5 seconds... 
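Note: NOT, invoked above on wait 4037959, is the autotest idiom for asserting failure: this run was configured to die on the injected errors, so the test only passes if bdevperf's exit status is non-zero. The exit-status bookkeeping visible in the trace below (es=255, the > 128 clamp to 127, the case fold down to 1) normalizes whatever failure mode occurred into a single bit. A simplified sketch of both pieces:

    # Simplified NOT helper: succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=127        # fold signal-style statuses (128+N) together
        case "$es" in 127) es=1 ;; esac # collapse recognized failures to 1
        (( es != 0 ))                   # success iff the command really did fail
    }
    NOT wait "$ERR_PID"                 # passes because the error run aborted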
00:09:37.983 task offset: 41064 on job bdev=EE_Dev_1 fails 00:09:37.983 00:09:37.983 Latency(us) 00:09:37.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.983 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:37.983 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:09:37.983 EE_Dev_1 : 0.00 27127.00 105.96 6165.23 0.00 402.65 142.38 713.87 00:09:37.983 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:37.983 Dev_2 : 0.00 16806.72 65.65 0.00 0.00 711.53 139.46 1326.32 00:09:37.983 =================================================================================================================== 00:09:37.983 Total : 43933.73 171.62 6165.23 0.00 570.18 139.46 1326.32 00:09:37.983 [2024-05-15 03:04:09.111209] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.983 request: 00:09:37.983 { 00:09:37.983 "method": "perform_tests", 00:09:37.983 "req_id": 1 00:09:37.983 } 00:09:37.983 Got JSON-RPC error response 00:09:37.983 response: 00:09:37.983 { 00:09:37.983 "code": -32603, 00:09:37.983 "message": "bdevperf failed with error Operation not permitted" 00:09:37.983 } 00:09:38.242 03:04:09 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:09:38.242 03:04:09 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:38.242 03:04:09 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:09:38.242 03:04:09 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:09:38.242 03:04:09 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:09:38.242 03:04:09 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:38.242 00:09:38.242 real 0m8.964s 00:09:38.242 user 0m9.481s 00:09:38.242 sys 0m0.693s 00:09:38.242 03:04:09 blockdev_general.bdev_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:38.242 03:04:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:38.242 ************************************ 00:09:38.242 END TEST bdev_error 00:09:38.242 ************************************ 00:09:38.501 03:04:09 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:09:38.501 03:04:09 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:38.501 03:04:09 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:38.501 03:04:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:38.501 ************************************ 00:09:38.501 START TEST bdev_stat 00:09:38.501 ************************************ 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- common/autotest_common.sh@1121 -- # stat_test_suite '' 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=4038268 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 4038268' 00:09:38.501 Process Bdev IO statistics testing pid: 4038268 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT 
SIGTERM EXIT 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 4038268 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- common/autotest_common.sh@827 -- # '[' -z 4038268 ']' 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:38.501 03:04:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:38.501 [2024-05-15 03:04:09.531139] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:09:38.501 [2024-05-15 03:04:09.531195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038268 ] 00:09:38.501 [2024-05-15 03:04:09.627413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:38.760 [2024-05-15 03:04:09.725810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.760 [2024-05-15 03:04:09.725817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # return 0 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:39.695 Malloc_STAT 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_STAT 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local i 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 
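Note: the -m 0x3 core mask on the bdevperf launch above is why this suite brings up two reactors (bits 0 and 1 map to the reactors on cores 0 and 1 in the EAL output above), while the error runs used -m 0x2 and got a single reactor on core 1. Decoding a mask is just bit arithmetic:

    # Sketch: list the cores an SPDK core mask will start reactors on.
    mask=0x3
    for bit in $(seq 0 31); do
        (( (mask >> bit) & 1 )) && echo "reactor expected on core $bit"
    done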
00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:39.695 [ 00:09:39.695 { 00:09:39.695 "name": "Malloc_STAT", 00:09:39.695 "aliases": [ 00:09:39.695 "7ee31b0c-d7ae-4688-ab1a-887e70136371" 00:09:39.695 ], 00:09:39.695 "product_name": "Malloc disk", 00:09:39.695 "block_size": 512, 00:09:39.695 "num_blocks": 262144, 00:09:39.695 "uuid": "7ee31b0c-d7ae-4688-ab1a-887e70136371", 00:09:39.695 "assigned_rate_limits": { 00:09:39.695 "rw_ios_per_sec": 0, 00:09:39.695 "rw_mbytes_per_sec": 0, 00:09:39.695 "r_mbytes_per_sec": 0, 00:09:39.695 "w_mbytes_per_sec": 0 00:09:39.695 }, 00:09:39.695 "claimed": false, 00:09:39.695 "zoned": false, 00:09:39.695 "supported_io_types": { 00:09:39.695 "read": true, 00:09:39.695 "write": true, 00:09:39.695 "unmap": true, 00:09:39.695 "write_zeroes": true, 00:09:39.695 "flush": true, 00:09:39.695 "reset": true, 00:09:39.695 "compare": false, 00:09:39.695 "compare_and_write": false, 00:09:39.695 "abort": true, 00:09:39.695 "nvme_admin": false, 00:09:39.695 "nvme_io": false 00:09:39.695 }, 00:09:39.695 "memory_domains": [ 00:09:39.695 { 00:09:39.695 "dma_device_id": "system", 00:09:39.695 "dma_device_type": 1 00:09:39.695 }, 00:09:39.695 { 00:09:39.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.695 "dma_device_type": 2 00:09:39.695 } 00:09:39.695 ], 00:09:39.695 "driver_specific": {} 00:09:39.695 } 00:09:39.695 ] 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # return 0 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:09:39.695 03:04:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:39.695 Running I/O for 10 seconds... 
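Note: stat_function_test, which runs next in the trace, checks a simple interleaving invariant: an aggregate num_read_ops snapshot, then a per-channel snapshot whose sum must not fall below the first aggregate, then a second aggregate the sum must not exceed. With the values this run produced (172036, then 88320 + 89856 = 178176, then 188932) that is:

    # Sketch of the bdev_stat invariant, annotated with this run's numbers.
    io_count1=$(rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')          # 172036
    chan_sum=$(rpc.py bdev_get_iostat -b Malloc_STAT -c | jq '[.channels[].num_read_ops] | add') # 178176
    io_count2=$(rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')          # 188932
    # counters only move forward, so the mid-sample sum must sit between the snapshots
    [ "$chan_sum" -lt "$io_count1" ] && { echo 'channel sum ran backwards'; exit 1; }
    [ "$chan_sum" -gt "$io_count2" ] && { echo 'channel sum overshot'; exit 1; }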
00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:09:41.598 "tick_rate": 2100000000, 00:09:41.598 "ticks": 17490171208454584, 00:09:41.598 "bdevs": [ 00:09:41.598 { 00:09:41.598 "name": "Malloc_STAT", 00:09:41.598 "bytes_read": 704688640, 00:09:41.598 "num_read_ops": 172036, 00:09:41.598 "bytes_written": 0, 00:09:41.598 "num_write_ops": 0, 00:09:41.598 "bytes_unmapped": 0, 00:09:41.598 "num_unmap_ops": 0, 00:09:41.598 "bytes_copied": 0, 00:09:41.598 "num_copy_ops": 0, 00:09:41.598 "read_latency_ticks": 2026903115746, 00:09:41.598 "max_read_latency_ticks": 13682838, 00:09:41.598 "min_read_latency_ticks": 238338, 00:09:41.598 "write_latency_ticks": 0, 00:09:41.598 "max_write_latency_ticks": 0, 00:09:41.598 "min_write_latency_ticks": 0, 00:09:41.598 "unmap_latency_ticks": 0, 00:09:41.598 "max_unmap_latency_ticks": 0, 00:09:41.598 "min_unmap_latency_ticks": 0, 00:09:41.598 "copy_latency_ticks": 0, 00:09:41.598 "max_copy_latency_ticks": 0, 00:09:41.598 "min_copy_latency_ticks": 0, 00:09:41.598 "io_error": {} 00:09:41.598 } 00:09:41.598 ] 00:09:41.598 }' 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=172036 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:09:41.598 "tick_rate": 2100000000, 00:09:41.598 "ticks": 17490171351601088, 00:09:41.598 "name": "Malloc_STAT", 00:09:41.598 "channels": [ 00:09:41.598 { 00:09:41.598 "thread_id": 2, 00:09:41.598 "bytes_read": 361758720, 00:09:41.598 "num_read_ops": 88320, 00:09:41.598 "bytes_written": 0, 00:09:41.598 "num_write_ops": 0, 00:09:41.598 "bytes_unmapped": 0, 00:09:41.598 "num_unmap_ops": 0, 
00:09:41.598 "bytes_copied": 0, 00:09:41.598 "num_copy_ops": 0, 00:09:41.598 "read_latency_ticks": 1049225626008, 00:09:41.598 "max_read_latency_ticks": 12649342, 00:09:41.598 "min_read_latency_ticks": 7842036, 00:09:41.598 "write_latency_ticks": 0, 00:09:41.598 "max_write_latency_ticks": 0, 00:09:41.598 "min_write_latency_ticks": 0, 00:09:41.598 "unmap_latency_ticks": 0, 00:09:41.598 "max_unmap_latency_ticks": 0, 00:09:41.598 "min_unmap_latency_ticks": 0, 00:09:41.598 "copy_latency_ticks": 0, 00:09:41.598 "max_copy_latency_ticks": 0, 00:09:41.598 "min_copy_latency_ticks": 0 00:09:41.598 }, 00:09:41.598 { 00:09:41.598 "thread_id": 3, 00:09:41.598 "bytes_read": 368050176, 00:09:41.598 "num_read_ops": 89856, 00:09:41.598 "bytes_written": 0, 00:09:41.598 "num_write_ops": 0, 00:09:41.598 "bytes_unmapped": 0, 00:09:41.598 "num_unmap_ops": 0, 00:09:41.598 "bytes_copied": 0, 00:09:41.598 "num_copy_ops": 0, 00:09:41.598 "read_latency_ticks": 1050169525336, 00:09:41.598 "max_read_latency_ticks": 13682838, 00:09:41.598 "min_read_latency_ticks": 7806086, 00:09:41.598 "write_latency_ticks": 0, 00:09:41.598 "max_write_latency_ticks": 0, 00:09:41.598 "min_write_latency_ticks": 0, 00:09:41.598 "unmap_latency_ticks": 0, 00:09:41.598 "max_unmap_latency_ticks": 0, 00:09:41.598 "min_unmap_latency_ticks": 0, 00:09:41.598 "copy_latency_ticks": 0, 00:09:41.598 "max_copy_latency_ticks": 0, 00:09:41.598 "min_copy_latency_ticks": 0 00:09:41.598 } 00:09:41.598 ] 00:09:41.598 }' 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=88320 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=88320 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=89856 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=178176 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.598 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:41.857 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.857 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:09:41.857 "tick_rate": 2100000000, 00:09:41.857 "ticks": 17490171600273270, 00:09:41.857 "bdevs": [ 00:09:41.857 { 00:09:41.857 "name": "Malloc_STAT", 00:09:41.857 "bytes_read": 773894656, 00:09:41.857 "num_read_ops": 188932, 00:09:41.857 "bytes_written": 0, 00:09:41.857 "num_write_ops": 0, 00:09:41.857 "bytes_unmapped": 0, 00:09:41.857 "num_unmap_ops": 0, 00:09:41.857 "bytes_copied": 0, 00:09:41.857 "num_copy_ops": 0, 00:09:41.857 "read_latency_ticks": 2226132607622, 00:09:41.857 "max_read_latency_ticks": 13682838, 00:09:41.857 "min_read_latency_ticks": 238338, 00:09:41.857 "write_latency_ticks": 0, 00:09:41.857 "max_write_latency_ticks": 0, 00:09:41.857 "min_write_latency_ticks": 0, 00:09:41.857 "unmap_latency_ticks": 0, 00:09:41.857 "max_unmap_latency_ticks": 0, 00:09:41.857 "min_unmap_latency_ticks": 0, 00:09:41.857 "copy_latency_ticks": 0, 00:09:41.857 "max_copy_latency_ticks": 0, 00:09:41.857 
"min_copy_latency_ticks": 0, 00:09:41.857 "io_error": {} 00:09:41.857 } 00:09:41.857 ] 00:09:41.857 }' 00:09:41.857 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:09:41.857 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=188932 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 178176 -lt 172036 ']' 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 178176 -gt 188932 ']' 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:41.858 00:09:41.858 Latency(us) 00:09:41.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.858 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:09:41.858 Malloc_STAT : 2.15 45187.60 176.51 0.00 0.00 5650.93 1490.16 6054.28 00:09:41.858 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:41.858 Malloc_STAT : 2.16 45961.04 179.54 0.00 0.00 5556.50 1076.66 6522.39 00:09:41.858 =================================================================================================================== 00:09:41.858 Total : 91148.64 356.05 0.00 0.00 5603.28 1076.66 6522.39 00:09:41.858 0 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 4038268 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@946 -- # '[' -z 4038268 ']' 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # kill -0 4038268 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # uname 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4038268 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4038268' 00:09:41.858 killing process with pid 4038268 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@965 -- # kill 4038268 00:09:41.858 Received shutdown signal, test time was about 2.229729 seconds 00:09:41.858 00:09:41.858 Latency(us) 00:09:41.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.858 =================================================================================================================== 00:09:41.858 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:41.858 03:04:12 blockdev_general.bdev_stat -- common/autotest_common.sh@970 -- # wait 4038268 00:09:42.116 03:04:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:09:42.116 00:09:42.116 real 0m3.659s 00:09:42.116 user 0m7.432s 00:09:42.116 sys 0m0.391s 00:09:42.116 03:04:13 blockdev_general.bdev_stat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:42.116 03:04:13 
blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:42.116 ************************************ 00:09:42.116 END TEST bdev_stat 00:09:42.116 ************************************ 00:09:42.116 03:04:13 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:09:42.116 03:04:13 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:09:42.116 03:04:13 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:42.116 03:04:13 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:09:42.116 03:04:13 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:09:42.116 03:04:13 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:42.116 03:04:13 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:09:42.116 03:04:13 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:09:42.116 03:04:13 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:09:42.116 03:04:13 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:09:42.116 00:09:42.116 real 1m54.014s 00:09:42.116 user 7m21.133s 00:09:42.116 sys 0m17.345s 00:09:42.116 03:04:13 blockdev_general -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:42.116 03:04:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:42.116 ************************************ 00:09:42.116 END TEST blockdev_general 00:09:42.116 ************************************ 00:09:42.116 03:04:13 -- spdk/autotest.sh@186 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:42.116 03:04:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:42.116 03:04:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:42.116 03:04:13 -- common/autotest_common.sh@10 -- # set +x 00:09:42.116 ************************************ 00:09:42.116 START TEST bdev_raid 00:09:42.116 ************************************ 00:09:42.116 03:04:13 bdev_raid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:42.375 * Looking for test storage... 
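For reference, the bdev_stat pass that closes above validates counter consistency rather than absolute values: it sums the per-channel num_read_ops and requires that sum to land between two aggregate samples, which is what the '[' 178176 -lt 172036 ']' and '[' 178176 -gt 188932 ']' brackets in the trace check. A minimal standalone sketch of the same invariant, assuming a running SPDK app with an IO-loaded Malloc_STAT bdev and rpc.py's per-channel flag:

  # Sketch only; counters are monotonic, so a per-channel sum sampled between
  # two aggregate reads must fall inside [count1, count2].
  rpc="./scripts/rpc.py"
  count1=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  sum=$($rpc bdev_get_iostat -b Malloc_STAT -c | jq '[.channels[].num_read_ops] | add')
  count2=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  [ "$sum" -ge "$count1" ] && [ "$sum" -le "$count2" ] && echo "stat counters consistent"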
00:09:42.375 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:42.375 03:04:13 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:42.375 03:04:13 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:42.375 03:04:13 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:09:42.375 03:04:13 bdev_raid -- bdev/bdev_raid.sh@800 -- # trap 'on_error_exit;' ERR 00:09:42.375 03:04:13 bdev_raid -- bdev/bdev_raid.sh@802 -- # base_blocklen=512 00:09:42.375 03:04:13 bdev_raid -- bdev/bdev_raid.sh@804 -- # uname -s 00:09:42.375 03:04:13 bdev_raid -- bdev/bdev_raid.sh@804 -- # '[' Linux = Linux ']' 00:09:42.375 03:04:13 bdev_raid -- bdev/bdev_raid.sh@804 -- # modprobe -n nbd 00:09:42.375 03:04:13 bdev_raid -- bdev/bdev_raid.sh@805 -- # has_nbd=true 00:09:42.375 03:04:13 bdev_raid -- bdev/bdev_raid.sh@806 -- # modprobe nbd 00:09:42.375 03:04:13 bdev_raid -- bdev/bdev_raid.sh@807 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:42.375 03:04:13 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:42.375 03:04:13 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:42.375 03:04:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.375 ************************************ 00:09:42.375 START TEST raid_function_test_raid0 00:09:42.375 ************************************ 00:09:42.375 03:04:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1121 -- # raid_function_test raid0 00:09:42.375 03:04:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:09:42.375 03:04:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:09:42.375 03:04:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:09:42.375 03:04:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # raid_pid=4038979 00:09:42.375 03:04:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:42.375 03:04:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 4038979' 00:09:42.375 Process raid pid: 4038979 00:09:42.375 03:04:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@88 -- # waitforlisten 4038979 /var/tmp/spdk-raid.sock 00:09:42.375 03:04:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@827 -- # '[' -z 4038979 ']' 00:09:42.375 03:04:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:42.375 03:04:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:42.375 03:04:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:42.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
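The launch sequence above is the template every raid test in this section reuses: bdev_svc is started on a private RPC socket (-r /var/tmp/spdk-raid.sock) and the harness blocks until that socket answers before sending any configuration RPCs. Roughly what that wait amounts to, as an illustrative poll loop (the real waitforlisten in autotest_common.sh does more bookkeeping; spdk_get_version serves here only as a cheap liveness probe):

  sock=/var/tmp/spdk-raid.sock
  ./test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
  pid=$!
  for _ in $(seq 1 100); do
      # Any inexpensive RPC succeeds once the socket is accepting connections.
      ./scripts/rpc.py -s "$sock" spdk_get_version > /dev/null 2>&1 && break
      kill -0 "$pid" 2> /dev/null || { echo "app exited before listening" >&2; exit 1; }
      sleep 0.1
  done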
00:09:42.376 03:04:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:42.376 03:04:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:42.376 [2024-05-15 03:04:13.451568] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:09:42.376 [2024-05-15 03:04:13.451619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.641 [2024-05-15 03:04:13.552333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.641 [2024-05-15 03:04:13.646041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.641 [2024-05-15 03:04:13.705130] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.641 [2024-05-15 03:04:13.705158] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.576 03:04:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:43.576 03:04:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # return 0 00:09:43.576 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:09:43.576 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:09:43.576 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:43.576 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # cat 00:09:43.576 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:43.576 [2024-05-15 03:04:14.666277] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:43.576 [2024-05-15 03:04:14.667783] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:43.576 [2024-05-15 03:04:14.667844] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x202f8b0 00:09:43.576 [2024-05-15 03:04:14.667861] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:43.576 [2024-05-15 03:04:14.668059] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e92ba0 00:09:43.576 [2024-05-15 03:04:14.668185] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x202f8b0 00:09:43.576 [2024-05-15 03:04:14.668194] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x202f8b0 00:09:43.576 [2024-05-15 03:04:14.668300] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.576 Base_1 00:09:43.576 Base_2 00:09:43.576 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:43.576 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:43.576 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:09:43.835 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:09:43.835 03:04:14 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:09:43.835 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:43.835 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:43.835 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:43.835 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:43.835 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:43.835 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:43.835 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:43.835 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:43.835 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:43.835 03:04:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:44.093 [2024-05-15 03:04:15.187689] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e75440 00:09:44.093 /dev/nbd0 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@865 -- # local i 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # break 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.093 1+0 records in 00:09:44.093 1+0 records out 00:09:44.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230353 s, 17.8 MB/s 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # size=4096 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # return 0 00:09:44.093 03:04:15 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:44.093 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:44.351 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:44.351 { 00:09:44.351 "nbd_device": "/dev/nbd0", 00:09:44.351 "bdev_name": "raid" 00:09:44.351 } 00:09:44.351 ]' 00:09:44.351 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:44.351 { 00:09:44.351 "nbd_device": "/dev/nbd0", 00:09:44.351 "bdev_name": "raid" 00:09:44.351 } 00:09:44.351 ]' 00:09:44.351 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # count=1 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local blksize 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:09:44.609 
03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:09:44.609 4096+0 records in 00:09:44.609 4096+0 records out 00:09:44.609 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0254772 s, 82.3 MB/s 00:09:44.609 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:44.868 4096+0 records in 00:09:44.868 4096+0 records out 00:09:44.868 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.256162 s, 8.2 MB/s 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:44.868 128+0 records in 00:09:44.868 128+0 records out 00:09:44.868 65536 bytes (66 kB, 64 KiB) copied, 0.000363479 s, 180 MB/s 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:44.868 2035+0 records in 00:09:44.868 2035+0 records out 00:09:44.868 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00486123 s, 214 MB/s 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:09:44.868 03:04:15 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:44.868 456+0 records in 00:09:44.868 456+0 records out 00:09:44.868 233472 bytes (233 kB, 228 KiB) copied, 0.00114103 s, 205 MB/s 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@53 -- # return 0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:44.868 03:04:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:45.127 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:45.127 [2024-05-15 03:04:16.200790] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.127 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:45.127 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:45.127 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.127 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.127 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:45.127 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:45.127 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.127 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:45.127 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:45.127 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:45.385 03:04:16 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # count=0 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@111 -- # killprocess 4038979 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@946 -- # '[' -z 4038979 ']' 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # kill -0 4038979 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@951 -- # uname 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:45.385 03:04:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4038979 00:09:45.644 03:04:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:45.644 03:04:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:45.644 03:04:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4038979' 00:09:45.644 killing process with pid 4038979 00:09:45.644 03:04:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@965 -- # kill 4038979 00:09:45.644 [2024-05-15 03:04:16.566240] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.644 [2024-05-15 03:04:16.566312] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.644 [2024-05-15 03:04:16.566355] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.644 [2024-05-15 03:04:16.566365] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x202f8b0 name raid, state offline 00:09:45.644 03:04:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # wait 4038979 00:09:45.644 [2024-05-15 03:04:16.582764] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.903 03:04:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@113 -- # return 0 00:09:45.903 00:09:45.903 real 0m3.413s 00:09:45.903 user 0m4.747s 00:09:45.903 sys 0m1.014s 00:09:45.903 03:04:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:45.903 03:04:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 ************************************ 00:09:45.903 END TEST raid_function_test_raid0 00:09:45.903 ************************************ 00:09:45.903 03:04:16 bdev_raid -- bdev/bdev_raid.sh@808 -- # run_test raid_function_test_concat raid_function_test concat 00:09:45.903 03:04:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:45.903 03:04:16 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:45.903 
03:04:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 ************************************ 00:09:45.903 START TEST raid_function_test_concat 00:09:45.903 ************************************ 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1121 -- # raid_function_test concat 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # raid_pid=4039738 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 4039738' 00:09:45.903 Process raid pid: 4039738 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@88 -- # waitforlisten 4039738 /var/tmp/spdk-raid.sock 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@827 -- # '[' -z 4039738 ']' 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:45.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:45.903 03:04:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:45.903 [2024-05-15 03:04:16.937438] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
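The raid0 pass that just ended and the concat pass starting here exercise the identical raid_unmap_data_verify loop: write a random reference file through the exported /dev/nbd0, then for each (offset, length) pair zero that range in the reference file and blkdiscard it on the device, expecting the two to stay byte-identical (the malloc base bdevs return zeroes for unmapped blocks). One iteration, condensed from the trace above using the 1028/2035 pair as the example:

  nbd=/dev/nbd0 ref=/raidrandtest
  dd if=/dev/urandom of=$ref bs=512 count=4096               # build the reference pattern
  dd if=$ref of=$nbd bs=512 count=4096 oflag=direct          # push it through the raid bdev
  blockdev --flushbufs $nbd
  cmp -b -n 2097152 $ref $nbd                                # full-device match
  dd if=/dev/zero of=$ref bs=512 seek=1028 count=2035 conv=notrunc  # zero the range locally
  blkdiscard -o $((1028 * 512)) -l $((2035 * 512)) $nbd      # unmap the same range on the device
  blockdev --flushbufs $nbd
  cmp -b -n 2097152 $ref $nbd                                # unmapped range must read as zeroes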
00:09:45.903 [2024-05-15 03:04:16.937493] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.903 [2024-05-15 03:04:17.038258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.161 [2024-05-15 03:04:17.127502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.161 [2024-05-15 03:04:17.197375] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.161 [2024-05-15 03:04:17.197408] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.727 03:04:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:46.727 03:04:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # return 0 00:09:46.727 03:04:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:09:46.727 03:04:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:09:46.727 03:04:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:46.727 03:04:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # cat 00:09:46.727 03:04:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:46.986 [2024-05-15 03:04:17.995044] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:46.986 [2024-05-15 03:04:17.996525] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:46.986 [2024-05-15 03:04:17.996587] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x20d98b0 00:09:46.986 [2024-05-15 03:04:17.996596] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:46.986 [2024-05-15 03:04:17.996791] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f3cba0 00:09:46.986 [2024-05-15 03:04:17.996926] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20d98b0 00:09:46.986 [2024-05-15 03:04:17.996935] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x20d98b0 00:09:46.986 [2024-05-15 03:04:17.997039] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.986 Base_1 00:09:46.986 Base_2 00:09:46.986 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:46.986 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:46.986 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:09:47.246 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:09:47.246 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:09:47.246 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:47.246 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:09:47.246 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:47.246 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:47.246 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:47.246 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:47.246 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:47.246 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:47.246 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:47.246 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:47.504 [2024-05-15 03:04:18.512446] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f1f440 00:09:47.504 /dev/nbd0 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@865 -- # local i 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # break 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:47.504 1+0 records in 00:09:47.504 1+0 records out 00:09:47.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218352 s, 18.8 MB/s 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # size=4096 00:09:47.504 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:47.505 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:47.505 03:04:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # return 0 00:09:47.505 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.505 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:47.505 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:47.505 
03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:47.505 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:47.763 { 00:09:47.763 "nbd_device": "/dev/nbd0", 00:09:47.763 "bdev_name": "raid" 00:09:47.763 } 00:09:47.763 ]' 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:47.763 { 00:09:47.763 "nbd_device": "/dev/nbd0", 00:09:47.763 "bdev_name": "raid" 00:09:47.763 } 00:09:47.763 ]' 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # count=1 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local blksize 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # blksize=512 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:09:47.763 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd 
if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:09:48.022 4096+0 records in 00:09:48.022 4096+0 records out 00:09:48.022 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0235652 s, 89.0 MB/s 00:09:48.022 03:04:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:48.022 4096+0 records in 00:09:48.022 4096+0 records out 00:09:48.022 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.242965 s, 8.6 MB/s 00:09:48.022 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:09:48.022 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:48.279 128+0 records in 00:09:48.279 128+0 records out 00:09:48.279 65536 bytes (66 kB, 64 KiB) copied, 0.000397408 s, 165 MB/s 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:48.279 2035+0 records in 00:09:48.279 2035+0 records out 00:09:48.279 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.002482 s, 420 MB/s 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:09:48.279 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:48.280 456+0 records in 00:09:48.280 456+0 records out 00:09:48.280 233472 bytes (233 kB, 228 KiB) copied, 0.00115699 s, 
202 MB/s 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@53 -- # return 0 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.280 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:48.537 [2024-05-15 03:04:19.506539] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.537 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:48.537 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:48.537 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:48.537 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.537 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.537 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:48.537 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:48.537 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.537 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:48.537 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:48.537 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # count=0 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@111 -- # killprocess 4039738 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@946 -- # '[' -z 4039738 ']' 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # kill -0 4039738 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@951 -- # uname 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4039738 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4039738' 00:09:48.795 killing process with pid 4039738 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@965 -- # kill 4039738 00:09:48.795 [2024-05-15 03:04:19.879332] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.795 [2024-05-15 03:04:19.879400] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.795 [2024-05-15 03:04:19.879443] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.795 [2024-05-15 03:04:19.879453] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20d98b0 name raid, state offline 00:09:48.795 03:04:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # wait 4039738 00:09:48.795 [2024-05-15 03:04:19.895802] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.053 03:04:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@113 -- # return 0 00:09:49.053 00:09:49.053 real 0m3.242s 00:09:49.053 user 0m4.446s 00:09:49.053 sys 0m0.980s 00:09:49.053 03:04:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:49.054 03:04:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:49.054 ************************************ 00:09:49.054 END TEST raid_function_test_concat 00:09:49.054 ************************************ 00:09:49.054 03:04:20 bdev_raid -- bdev/bdev_raid.sh@811 -- # run_test raid0_resize_test raid0_resize_test 00:09:49.054 03:04:20 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:49.054 03:04:20 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:49.054 03:04:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.054 ************************************ 00:09:49.054 START TEST raid0_resize_test 00:09:49.054 ************************************ 00:09:49.054 03:04:20 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1121 -- # raid0_resize_test 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # raid_pid=4040281 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # echo 'Process raid pid: 4040281' 00:09:49.054 Process raid pid: 4040281 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # waitforlisten 4040281 /var/tmp/spdk-raid.sock 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@827 -- # '[' -z 4040281 ']' 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:49.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:49.054 03:04:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.311 [2024-05-15 03:04:20.260180] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
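The resize pass spinning up here (its trace continues below) builds a raid0 from two 32 MiB null bdevs with a 64 KiB strip, grows each base to 64 MiB via bdev_null_resize, and asserts that the raid's num_blocks doubles from 131072 to 262144 only once both bases have grown. The RPC sequence reduced to its core, with the socket path used throughout this section:

  rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_null_create Base_1 32 512
  $rpc bdev_null_create Base_2 32 512
  $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
  $rpc bdev_null_resize Base_1 64        # raid still reports 131072 blocks
  $rpc bdev_null_resize Base_2 64        # smallest base grew, so the raid grows
  [ "$($rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks')" -eq 262144 ] && echo "resize propagated"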
00:09:49.311 [2024-05-15 03:04:20.260238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.311 [2024-05-15 03:04:20.358000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.311 [2024-05-15 03:04:20.450090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.568 [2024-05-15 03:04:20.511018] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.568 [2024-05-15 03:04:20.511051] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.138 03:04:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:50.138 03:04:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # return 0 00:09:50.138 03:04:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:09:50.400 Base_1 00:09:50.401 03:04:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:09:50.662 Base_2 00:09:50.662 03:04:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@363 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:09:50.920 [2024-05-15 03:04:21.932049] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:50.920 [2024-05-15 03:04:21.933604] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:50.920 [2024-05-15 03:04:21.933654] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0xe8d4c0 00:09:50.920 [2024-05-15 03:04:21.933662] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:50.920 [2024-05-15 03:04:21.933882] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcdd680 00:09:50.920 [2024-05-15 03:04:21.933982] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe8d4c0 00:09:50.920 [2024-05-15 03:04:21.933990] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0xe8d4c0 00:09:50.920 [2024-05-15 03:04:21.934100] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.920 03:04:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:09:51.178 [2024-05-15 03:04:22.184703] bdev_raid.c:2216:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:51.178 [2024-05-15 03:04:22.184723] bdev_raid.c:2229:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:51.178 true 00:09:51.178 03:04:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:51.178 03:04:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # jq '.[].num_blocks' 00:09:51.436 [2024-05-15 03:04:22.437508] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.436 03:04:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # 
blkcnt=131072 00:09:51.436 03:04:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # raid_size_mb=64 00:09:51.436 03:04:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # '[' 64 '!=' 64 ']' 00:09:51.436 03:04:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:09:51.702 [2024-05-15 03:04:22.682009] bdev_raid.c:2216:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:51.702 [2024-05-15 03:04:22.682029] bdev_raid.c:2229:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:51.702 [2024-05-15 03:04:22.682050] bdev_raid.c:2243:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:51.702 true 00:09:51.702 03:04:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:51.702 03:04:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # jq '.[].num_blocks' 00:09:52.003 [2024-05-15 03:04:22.922792] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # blkcnt=262144 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # raid_size_mb=128 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 4040281 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@946 -- # '[' -z 4040281 ']' 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # kill -0 4040281 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # uname 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4040281 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4040281' 00:09:52.003 killing process with pid 4040281 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@965 -- # kill 4040281 00:09:52.003 [2024-05-15 03:04:22.989840] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.003 [2024-05-15 03:04:22.989906] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.003 [2024-05-15 03:04:22.989948] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.003 [2024-05-15 03:04:22.989957] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe8d4c0 name Raid, state offline 00:09:52.003 03:04:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # wait 4040281 00:09:52.003 [2024-05-15 03:04:22.991214] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.262 03:04:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:52.262 00:09:52.262 real 
0m2.998s 00:09:52.262 user 0m4.748s 00:09:52.262 sys 0m0.536s 00:09:52.262 03:04:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:52.262 03:04:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.262 ************************************ 00:09:52.262 END TEST raid0_resize_test 00:09:52.262 ************************************ 00:09:52.262 03:04:23 bdev_raid -- bdev/bdev_raid.sh@813 -- # for n in {2..4} 00:09:52.262 03:04:23 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:09:52.262 03:04:23 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:52.262 03:04:23 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:52.262 03:04:23 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:52.262 03:04:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.262 ************************************ 00:09:52.262 START TEST raid_state_function_test 00:09:52.262 ************************************ 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 false 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:09:52.262 03:04:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=4040811 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4040811' 00:09:52.262 Process raid pid: 4040811 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 4040811 /var/tmp/spdk-raid.sock 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 4040811 ']' 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:52.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:52.262 03:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.262 [2024-05-15 03:04:23.336777] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
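Like the raid0_resize_test that just finished, this test drives a bare bdev_svc app over the /var/tmp/spdk-raid.sock RPC socket. For reference, the resize sequence exercised above reduces to the following RPC calls (a sketch; the absolute Jenkins workspace path to rpc.py is abbreviated to ./scripts/rpc.py):

# Create two 32 MB null bdevs with a 512-byte block size (65536 blocks each)
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512
# Assemble them into a raid0 bdev with a 64 KB strip size (-r 0 selects raid0)
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
# Grow each member to 64 MB, checking the raid's size after each step
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid | jq '.[].num_blocks'

A raid0 bdev only advertises the new capacity once every base bdev has grown, which is why the NOTICE about the raid block count changing from 131072 to 262144 appears only after the second resize in the log above.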
00:09:52.262 [2024-05-15 03:04:23.336832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.533 [2024-05-15 03:04:23.428608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.533 [2024-05-15 03:04:23.519048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.533 [2024-05-15 03:04:23.580392] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.533 [2024-05-15 03:04:23.580425] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.534 03:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:52.534 03:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:09:52.534 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:52.797 [2024-05-15 03:04:23.874782] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.797 [2024-05-15 03:04:23.874818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.797 [2024-05-15 03:04:23.874831] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.797 [2024-05-15 03:04:23.874841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.797 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:52.797 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:52.797 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:52.797 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:52.797 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:52.797 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:52.797 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:52.797 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:52.797 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:52.797 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:52.797 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.797 03:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.055 03:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:53.055 "name": "Existed_Raid", 00:09:53.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.055 "strip_size_kb": 64, 00:09:53.055 "state": "configuring", 00:09:53.055 "raid_level": "raid0", 00:09:53.055 "superblock": false, 00:09:53.055 "num_base_bdevs": 2, 
00:09:53.055 "num_base_bdevs_discovered": 0, 00:09:53.055 "num_base_bdevs_operational": 2, 00:09:53.055 "base_bdevs_list": [ 00:09:53.055 { 00:09:53.055 "name": "BaseBdev1", 00:09:53.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.055 "is_configured": false, 00:09:53.055 "data_offset": 0, 00:09:53.055 "data_size": 0 00:09:53.055 }, 00:09:53.055 { 00:09:53.055 "name": "BaseBdev2", 00:09:53.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.055 "is_configured": false, 00:09:53.055 "data_offset": 0, 00:09:53.055 "data_size": 0 00:09:53.055 } 00:09:53.055 ] 00:09:53.055 }' 00:09:53.055 03:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:53.055 03:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.620 03:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:53.879 [2024-05-15 03:04:24.921441] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.879 [2024-05-15 03:04:24.921467] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2171dc0 name Existed_Raid, state configuring 00:09:53.879 03:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:54.137 [2024-05-15 03:04:25.178137] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.137 [2024-05-15 03:04:25.178161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.137 [2024-05-15 03:04:25.178169] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.137 [2024-05-15 03:04:25.178177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.137 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.395 [2024-05-15 03:04:25.444273] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.395 BaseBdev1 00:09:54.395 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:09:54.395 03:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:09:54.395 03:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:54.395 03:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:09:54.395 03:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:54.395 03:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:54.395 03:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:54.653 03:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.911 [ 00:09:54.911 { 
00:09:54.911 "name": "BaseBdev1", 00:09:54.911 "aliases": [ 00:09:54.912 "86164d7e-31d9-439f-923e-51e63f1d78f7" 00:09:54.912 ], 00:09:54.912 "product_name": "Malloc disk", 00:09:54.912 "block_size": 512, 00:09:54.912 "num_blocks": 65536, 00:09:54.912 "uuid": "86164d7e-31d9-439f-923e-51e63f1d78f7", 00:09:54.912 "assigned_rate_limits": { 00:09:54.912 "rw_ios_per_sec": 0, 00:09:54.912 "rw_mbytes_per_sec": 0, 00:09:54.912 "r_mbytes_per_sec": 0, 00:09:54.912 "w_mbytes_per_sec": 0 00:09:54.912 }, 00:09:54.912 "claimed": true, 00:09:54.912 "claim_type": "exclusive_write", 00:09:54.912 "zoned": false, 00:09:54.912 "supported_io_types": { 00:09:54.912 "read": true, 00:09:54.912 "write": true, 00:09:54.912 "unmap": true, 00:09:54.912 "write_zeroes": true, 00:09:54.912 "flush": true, 00:09:54.912 "reset": true, 00:09:54.912 "compare": false, 00:09:54.912 "compare_and_write": false, 00:09:54.912 "abort": true, 00:09:54.912 "nvme_admin": false, 00:09:54.912 "nvme_io": false 00:09:54.912 }, 00:09:54.912 "memory_domains": [ 00:09:54.912 { 00:09:54.912 "dma_device_id": "system", 00:09:54.912 "dma_device_type": 1 00:09:54.912 }, 00:09:54.912 { 00:09:54.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.912 "dma_device_type": 2 00:09:54.912 } 00:09:54.912 ], 00:09:54.912 "driver_specific": {} 00:09:54.912 } 00:09:54.912 ] 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.912 03:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.170 03:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:55.170 "name": "Existed_Raid", 00:09:55.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.170 "strip_size_kb": 64, 00:09:55.170 "state": "configuring", 00:09:55.170 "raid_level": "raid0", 00:09:55.170 "superblock": false, 00:09:55.170 "num_base_bdevs": 2, 00:09:55.170 "num_base_bdevs_discovered": 1, 00:09:55.170 "num_base_bdevs_operational": 2, 00:09:55.170 "base_bdevs_list": [ 00:09:55.170 { 00:09:55.170 "name": "BaseBdev1", 00:09:55.170 "uuid": "86164d7e-31d9-439f-923e-51e63f1d78f7", 00:09:55.170 
"is_configured": true, 00:09:55.170 "data_offset": 0, 00:09:55.170 "data_size": 65536 00:09:55.170 }, 00:09:55.170 { 00:09:55.170 "name": "BaseBdev2", 00:09:55.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.170 "is_configured": false, 00:09:55.170 "data_offset": 0, 00:09:55.170 "data_size": 0 00:09:55.170 } 00:09:55.170 ] 00:09:55.170 }' 00:09:55.170 03:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:55.170 03:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.735 03:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:55.992 [2024-05-15 03:04:27.084679] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.992 [2024-05-15 03:04:27.084716] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2172060 name Existed_Raid, state configuring 00:09:55.992 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:56.247 [2024-05-15 03:04:27.341384] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.247 [2024-05-15 03:04:27.342960] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.247 [2024-05-15 03:04:27.342990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.247 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.505 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:56.505 "name": "Existed_Raid", 00:09:56.505 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:56.505 "strip_size_kb": 64, 00:09:56.505 "state": "configuring", 00:09:56.505 "raid_level": "raid0", 00:09:56.505 "superblock": false, 00:09:56.505 "num_base_bdevs": 2, 00:09:56.505 "num_base_bdevs_discovered": 1, 00:09:56.505 "num_base_bdevs_operational": 2, 00:09:56.505 "base_bdevs_list": [ 00:09:56.505 { 00:09:56.505 "name": "BaseBdev1", 00:09:56.505 "uuid": "86164d7e-31d9-439f-923e-51e63f1d78f7", 00:09:56.505 "is_configured": true, 00:09:56.505 "data_offset": 0, 00:09:56.505 "data_size": 65536 00:09:56.505 }, 00:09:56.505 { 00:09:56.505 "name": "BaseBdev2", 00:09:56.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.505 "is_configured": false, 00:09:56.505 "data_offset": 0, 00:09:56.505 "data_size": 0 00:09:56.505 } 00:09:56.505 ] 00:09:56.505 }' 00:09:56.505 03:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:56.505 03:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.437 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.437 [2024-05-15 03:04:28.387345] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.437 [2024-05-15 03:04:28.387377] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x21716b0 00:09:57.437 [2024-05-15 03:04:28.387388] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:57.437 [2024-05-15 03:04:28.387584] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2171c70 00:09:57.437 [2024-05-15 03:04:28.387707] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21716b0 00:09:57.437 [2024-05-15 03:04:28.387716] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x21716b0 00:09:57.437 [2024-05-15 03:04:28.387885] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.437 BaseBdev2 00:09:57.437 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:09:57.437 03:04:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:09:57.437 03:04:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:57.438 03:04:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:09:57.438 03:04:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:57.438 03:04:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:57.438 03:04:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:57.695 03:04:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.953 [ 00:09:57.953 { 00:09:57.953 "name": "BaseBdev2", 00:09:57.953 "aliases": [ 00:09:57.953 "59c47199-7144-4be6-9fc4-418193c106e2" 00:09:57.953 ], 00:09:57.953 "product_name": "Malloc disk", 00:09:57.953 "block_size": 512, 00:09:57.953 "num_blocks": 65536, 00:09:57.953 "uuid": 
"59c47199-7144-4be6-9fc4-418193c106e2", 00:09:57.953 "assigned_rate_limits": { 00:09:57.953 "rw_ios_per_sec": 0, 00:09:57.953 "rw_mbytes_per_sec": 0, 00:09:57.953 "r_mbytes_per_sec": 0, 00:09:57.953 "w_mbytes_per_sec": 0 00:09:57.953 }, 00:09:57.953 "claimed": true, 00:09:57.953 "claim_type": "exclusive_write", 00:09:57.953 "zoned": false, 00:09:57.953 "supported_io_types": { 00:09:57.953 "read": true, 00:09:57.953 "write": true, 00:09:57.953 "unmap": true, 00:09:57.953 "write_zeroes": true, 00:09:57.953 "flush": true, 00:09:57.953 "reset": true, 00:09:57.953 "compare": false, 00:09:57.953 "compare_and_write": false, 00:09:57.953 "abort": true, 00:09:57.953 "nvme_admin": false, 00:09:57.953 "nvme_io": false 00:09:57.953 }, 00:09:57.953 "memory_domains": [ 00:09:57.953 { 00:09:57.953 "dma_device_id": "system", 00:09:57.953 "dma_device_type": 1 00:09:57.953 }, 00:09:57.953 { 00:09:57.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.953 "dma_device_type": 2 00:09:57.953 } 00:09:57.953 ], 00:09:57.953 "driver_specific": {} 00:09:57.953 } 00:09:57.953 ] 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:57.953 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.954 03:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.211 03:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:58.211 "name": "Existed_Raid", 00:09:58.211 "uuid": "d5c1ea40-30b3-479c-84e1-b97097edadca", 00:09:58.211 "strip_size_kb": 64, 00:09:58.211 "state": "online", 00:09:58.211 "raid_level": "raid0", 00:09:58.211 "superblock": false, 00:09:58.211 "num_base_bdevs": 2, 00:09:58.211 "num_base_bdevs_discovered": 2, 00:09:58.211 "num_base_bdevs_operational": 2, 00:09:58.211 "base_bdevs_list": [ 00:09:58.211 { 00:09:58.211 "name": "BaseBdev1", 00:09:58.211 "uuid": "86164d7e-31d9-439f-923e-51e63f1d78f7", 00:09:58.211 "is_configured": true, 00:09:58.211 "data_offset": 0, 00:09:58.211 
"data_size": 65536 00:09:58.211 }, 00:09:58.211 { 00:09:58.211 "name": "BaseBdev2", 00:09:58.211 "uuid": "59c47199-7144-4be6-9fc4-418193c106e2", 00:09:58.211 "is_configured": true, 00:09:58.211 "data_offset": 0, 00:09:58.211 "data_size": 65536 00:09:58.211 } 00:09:58.211 ] 00:09:58.211 }' 00:09:58.211 03:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:58.211 03:04:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.776 03:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:09:58.776 03:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:09:58.776 03:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:09:58.776 03:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:09:58.776 03:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:09:58.776 03:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:09:58.776 03:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:58.776 03:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:09:59.035 [2024-05-15 03:04:30.023987] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.035 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:09:59.035 "name": "Existed_Raid", 00:09:59.035 "aliases": [ 00:09:59.035 "d5c1ea40-30b3-479c-84e1-b97097edadca" 00:09:59.035 ], 00:09:59.035 "product_name": "Raid Volume", 00:09:59.035 "block_size": 512, 00:09:59.035 "num_blocks": 131072, 00:09:59.035 "uuid": "d5c1ea40-30b3-479c-84e1-b97097edadca", 00:09:59.035 "assigned_rate_limits": { 00:09:59.035 "rw_ios_per_sec": 0, 00:09:59.035 "rw_mbytes_per_sec": 0, 00:09:59.035 "r_mbytes_per_sec": 0, 00:09:59.035 "w_mbytes_per_sec": 0 00:09:59.035 }, 00:09:59.035 "claimed": false, 00:09:59.035 "zoned": false, 00:09:59.035 "supported_io_types": { 00:09:59.035 "read": true, 00:09:59.035 "write": true, 00:09:59.035 "unmap": true, 00:09:59.035 "write_zeroes": true, 00:09:59.035 "flush": true, 00:09:59.035 "reset": true, 00:09:59.035 "compare": false, 00:09:59.035 "compare_and_write": false, 00:09:59.035 "abort": false, 00:09:59.035 "nvme_admin": false, 00:09:59.035 "nvme_io": false 00:09:59.035 }, 00:09:59.035 "memory_domains": [ 00:09:59.035 { 00:09:59.035 "dma_device_id": "system", 00:09:59.035 "dma_device_type": 1 00:09:59.035 }, 00:09:59.035 { 00:09:59.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.035 "dma_device_type": 2 00:09:59.035 }, 00:09:59.035 { 00:09:59.035 "dma_device_id": "system", 00:09:59.035 "dma_device_type": 1 00:09:59.035 }, 00:09:59.035 { 00:09:59.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.035 "dma_device_type": 2 00:09:59.035 } 00:09:59.035 ], 00:09:59.035 "driver_specific": { 00:09:59.035 "raid": { 00:09:59.035 "uuid": "d5c1ea40-30b3-479c-84e1-b97097edadca", 00:09:59.035 "strip_size_kb": 64, 00:09:59.035 "state": "online", 00:09:59.035 "raid_level": "raid0", 00:09:59.035 "superblock": false, 00:09:59.035 "num_base_bdevs": 2, 00:09:59.035 "num_base_bdevs_discovered": 2, 00:09:59.035 "num_base_bdevs_operational": 2, 00:09:59.035 
"base_bdevs_list": [ 00:09:59.035 { 00:09:59.035 "name": "BaseBdev1", 00:09:59.035 "uuid": "86164d7e-31d9-439f-923e-51e63f1d78f7", 00:09:59.035 "is_configured": true, 00:09:59.035 "data_offset": 0, 00:09:59.035 "data_size": 65536 00:09:59.035 }, 00:09:59.035 { 00:09:59.035 "name": "BaseBdev2", 00:09:59.035 "uuid": "59c47199-7144-4be6-9fc4-418193c106e2", 00:09:59.035 "is_configured": true, 00:09:59.035 "data_offset": 0, 00:09:59.035 "data_size": 65536 00:09:59.035 } 00:09:59.035 ] 00:09:59.035 } 00:09:59.035 } 00:09:59.035 }' 00:09:59.035 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.035 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:09:59.035 BaseBdev2' 00:09:59.035 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:59.035 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:59.035 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:59.293 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:59.293 "name": "BaseBdev1", 00:09:59.293 "aliases": [ 00:09:59.293 "86164d7e-31d9-439f-923e-51e63f1d78f7" 00:09:59.293 ], 00:09:59.293 "product_name": "Malloc disk", 00:09:59.293 "block_size": 512, 00:09:59.293 "num_blocks": 65536, 00:09:59.293 "uuid": "86164d7e-31d9-439f-923e-51e63f1d78f7", 00:09:59.293 "assigned_rate_limits": { 00:09:59.293 "rw_ios_per_sec": 0, 00:09:59.293 "rw_mbytes_per_sec": 0, 00:09:59.293 "r_mbytes_per_sec": 0, 00:09:59.293 "w_mbytes_per_sec": 0 00:09:59.293 }, 00:09:59.293 "claimed": true, 00:09:59.293 "claim_type": "exclusive_write", 00:09:59.293 "zoned": false, 00:09:59.293 "supported_io_types": { 00:09:59.293 "read": true, 00:09:59.293 "write": true, 00:09:59.293 "unmap": true, 00:09:59.293 "write_zeroes": true, 00:09:59.293 "flush": true, 00:09:59.293 "reset": true, 00:09:59.293 "compare": false, 00:09:59.293 "compare_and_write": false, 00:09:59.293 "abort": true, 00:09:59.293 "nvme_admin": false, 00:09:59.293 "nvme_io": false 00:09:59.293 }, 00:09:59.293 "memory_domains": [ 00:09:59.293 { 00:09:59.293 "dma_device_id": "system", 00:09:59.293 "dma_device_type": 1 00:09:59.293 }, 00:09:59.293 { 00:09:59.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.293 "dma_device_type": 2 00:09:59.293 } 00:09:59.293 ], 00:09:59.293 "driver_specific": {} 00:09:59.293 }' 00:09:59.293 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:59.293 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:59.293 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:59.293 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:59.550 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:59.550 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:59.550 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:59.550 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:59.550 03:04:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:59.550 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:59.550 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:59.808 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:59.808 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:59.808 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:59.808 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:00.066 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:00.066 "name": "BaseBdev2", 00:10:00.066 "aliases": [ 00:10:00.066 "59c47199-7144-4be6-9fc4-418193c106e2" 00:10:00.066 ], 00:10:00.066 "product_name": "Malloc disk", 00:10:00.066 "block_size": 512, 00:10:00.066 "num_blocks": 65536, 00:10:00.066 "uuid": "59c47199-7144-4be6-9fc4-418193c106e2", 00:10:00.066 "assigned_rate_limits": { 00:10:00.066 "rw_ios_per_sec": 0, 00:10:00.066 "rw_mbytes_per_sec": 0, 00:10:00.066 "r_mbytes_per_sec": 0, 00:10:00.066 "w_mbytes_per_sec": 0 00:10:00.066 }, 00:10:00.066 "claimed": true, 00:10:00.066 "claim_type": "exclusive_write", 00:10:00.066 "zoned": false, 00:10:00.066 "supported_io_types": { 00:10:00.066 "read": true, 00:10:00.066 "write": true, 00:10:00.066 "unmap": true, 00:10:00.066 "write_zeroes": true, 00:10:00.066 "flush": true, 00:10:00.066 "reset": true, 00:10:00.066 "compare": false, 00:10:00.066 "compare_and_write": false, 00:10:00.066 "abort": true, 00:10:00.066 "nvme_admin": false, 00:10:00.066 "nvme_io": false 00:10:00.066 }, 00:10:00.066 "memory_domains": [ 00:10:00.066 { 00:10:00.066 "dma_device_id": "system", 00:10:00.066 "dma_device_type": 1 00:10:00.066 }, 00:10:00.066 { 00:10:00.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.066 "dma_device_type": 2 00:10:00.066 } 00:10:00.066 ], 00:10:00.066 "driver_specific": {} 00:10:00.066 }' 00:10:00.066 03:04:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:00.066 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:00.066 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:00.066 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:00.066 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:00.066 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:00.066 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:00.066 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:00.324 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:00.324 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:00.324 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:00.324 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:00.324 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:00.600 [2024-05-15 03:04:31.571926] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.600 [2024-05-15 03:04:31.571950] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.600 [2024-05-15 03:04:31.571988] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.600 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:10:00.600 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:10:00.600 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:10:00.600 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:10:00.600 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:10:00.600 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:00.600 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:00.600 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:00.600 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:00.601 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:00.601 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:10:00.601 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:00.601 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:00.601 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:00.601 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:00.601 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.601 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.859 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:00.859 "name": "Existed_Raid", 00:10:00.859 "uuid": "d5c1ea40-30b3-479c-84e1-b97097edadca", 00:10:00.859 "strip_size_kb": 64, 00:10:00.859 "state": "offline", 00:10:00.859 "raid_level": "raid0", 00:10:00.859 "superblock": false, 00:10:00.859 "num_base_bdevs": 2, 00:10:00.859 "num_base_bdevs_discovered": 1, 00:10:00.859 "num_base_bdevs_operational": 1, 00:10:00.859 "base_bdevs_list": [ 00:10:00.859 { 00:10:00.859 "name": null, 00:10:00.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.859 "is_configured": false, 00:10:00.859 "data_offset": 0, 00:10:00.859 "data_size": 65536 00:10:00.859 }, 00:10:00.859 { 00:10:00.859 "name": "BaseBdev2", 00:10:00.859 "uuid": "59c47199-7144-4be6-9fc4-418193c106e2", 00:10:00.859 "is_configured": true, 00:10:00.859 "data_offset": 0, 00:10:00.859 "data_size": 65536 00:10:00.859 } 00:10:00.859 ] 00:10:00.859 }' 00:10:00.859 03:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
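The transition just logged is the point of this test: raid0 carries no redundancy (has_redundancy returns 1 above, so expected_state becomes offline), meaning the loss of a single base bdev must take the whole array from online to offline rather than to a degraded state. The same check by hand, assuming the running bdev_svc instance and bdev names from this run:

# Deleting one member of a raid0 array takes the whole array offline
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
# The removed member is reported as a null entry and the state reads "offline"
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'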
00:10:00.859 03:04:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.425 03:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:01.425 03:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.425 03:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.425 03:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:10:01.684 03:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:10:01.684 03:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.684 03:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:01.995 [2024-05-15 03:04:32.960837] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.995 [2024-05-15 03:04:32.960893] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21716b0 name Existed_Raid, state offline 00:10:01.995 03:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.995 03:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.995 03:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.995 03:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 4040811 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 4040811 ']' 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 4040811 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4040811 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4040811' 00:10:02.254 killing process with pid 4040811 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 4040811 00:10:02.254 [2024-05-15 03:04:33.289419] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.254 03:04:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@970 -- # wait 4040811 00:10:02.254 [2024-05-15 03:04:33.290288] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.516 03:04:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:10:02.516 00:10:02.516 real 0m10.239s 00:10:02.516 user 0m18.978s 00:10:02.516 sys 0m1.540s 00:10:02.516 03:04:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:02.516 03:04:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.516 ************************************ 00:10:02.516 END TEST raid_state_function_test 00:10:02.516 ************************************ 00:10:02.516 03:04:33 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:02.516 03:04:33 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:02.516 03:04:33 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:02.516 03:04:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.516 ************************************ 00:10:02.516 START TEST raid_state_function_test_sb 00:10:02.516 ************************************ 00:10:02.516 03:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 true 00:10:02.516 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@232 -- # strip_size=64 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:10:02.517 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:10:02.518 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=4042815 00:10:02.518 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4042815' 00:10:02.518 Process raid pid: 4042815 00:10:02.518 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:02.518 03:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 4042815 /var/tmp/spdk-raid.sock 00:10:02.518 03:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 4042815 ']' 00:10:02.518 03:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:02.518 03:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:02.518 03:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:02.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:02.518 03:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:02.518 03:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.518 [2024-05-15 03:04:33.647480] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
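This superblock variant repeats the raid_state_function_test sequence with one difference: superblock=true turns superblock_create_arg into -s, so the bdev_raid_create calls that follow persist raid metadata on every base bdev. The visible effect in the later dumps is that each 65536-block malloc bdev contributes data_offset 2048 and data_size 63488 instead of 0 and 65536. A sketch of the create call, assuming the same socket and bdev names as in this run:

# -s reserves a superblock region at the start of each base bdev; per member,
# the usable data area becomes 63488 blocks starting at block offset 2048
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid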
00:10:02.518 [2024-05-15 03:04:33.647531] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.778 [2024-05-15 03:04:33.747352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.778 [2024-05-15 03:04:33.840324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.778 [2024-05-15 03:04:33.899861] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.778 [2024-05-15 03:04:33.899888] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:03.713 [2024-05-15 03:04:34.750391] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.713 [2024-05-15 03:04:34.750431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.713 [2024-05-15 03:04:34.750441] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.713 [2024-05-15 03:04:34.750450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:03.713 03:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.972 03:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:03.972 "name": "Existed_Raid", 00:10:03.972 "uuid": "a0600e56-22be-49b3-a0f8-624a9345e87c", 00:10:03.972 "strip_size_kb": 64, 00:10:03.972 "state": "configuring", 00:10:03.972 "raid_level": "raid0", 00:10:03.972 
"superblock": true, 00:10:03.972 "num_base_bdevs": 2, 00:10:03.972 "num_base_bdevs_discovered": 0, 00:10:03.972 "num_base_bdevs_operational": 2, 00:10:03.972 "base_bdevs_list": [ 00:10:03.972 { 00:10:03.972 "name": "BaseBdev1", 00:10:03.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.972 "is_configured": false, 00:10:03.972 "data_offset": 0, 00:10:03.972 "data_size": 0 00:10:03.972 }, 00:10:03.972 { 00:10:03.972 "name": "BaseBdev2", 00:10:03.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.972 "is_configured": false, 00:10:03.972 "data_offset": 0, 00:10:03.972 "data_size": 0 00:10:03.972 } 00:10:03.972 ] 00:10:03.972 }' 00:10:03.972 03:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:03.972 03:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.538 03:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:04.797 [2024-05-15 03:04:35.889272] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.797 [2024-05-15 03:04:35.889304] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xac1dc0 name Existed_Raid, state configuring 00:10:04.797 03:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:05.055 [2024-05-15 03:04:36.137949] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.055 [2024-05-15 03:04:36.137976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.055 [2024-05-15 03:04:36.137984] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.055 [2024-05-15 03:04:36.137993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.055 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.313 [2024-05-15 03:04:36.396209] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.313 BaseBdev1 00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:05.571 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
00:10:04.538 03:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:10:04.797 [2024-05-15 03:04:35.889272] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:04.797 [2024-05-15 03:04:35.889304] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xac1dc0 name Existed_Raid, state configuring
00:10:04.797 03:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:10:05.055 [2024-05-15 03:04:36.137949] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:05.055 [2024-05-15 03:04:36.137976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:05.055 [2024-05-15 03:04:36.137984] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:05.055 [2024-05-15 03:04:36.137993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:05.055 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:10:05.313 [2024-05-15 03:04:36.396209] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:05.313 BaseBdev1
00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1
00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1
00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout=
00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i
00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]]
00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000
00:10:05.313 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:10:05.571 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:05.830 [
00:10:05.830 {
00:10:05.830 "name": "BaseBdev1",
00:10:05.830 "aliases": [
00:10:05.830 "0f316496-bb09-4363-9d1f-5f83950028f8"
00:10:05.830 ],
00:10:05.830 "product_name": "Malloc disk",
00:10:05.830 "block_size": 512,
00:10:05.830 "num_blocks": 65536,
00:10:05.830 "uuid": "0f316496-bb09-4363-9d1f-5f83950028f8",
00:10:05.830 "assigned_rate_limits": {
00:10:05.830 "rw_ios_per_sec": 0,
00:10:05.830 "rw_mbytes_per_sec": 0,
00:10:05.830 "r_mbytes_per_sec": 0,
00:10:05.830 "w_mbytes_per_sec": 0
00:10:05.830 },
00:10:05.830 "claimed": true,
00:10:05.830 "claim_type": "exclusive_write",
00:10:05.830 "zoned": false,
00:10:05.830 "supported_io_types": {
00:10:05.830 "read": true,
00:10:05.830 "write": true,
00:10:05.830 "unmap": true,
00:10:05.830 "write_zeroes": true,
00:10:05.830 "flush": true,
00:10:05.830 "reset": true,
00:10:05.830 "compare": false,
00:10:05.830 "compare_and_write": false,
00:10:05.830 "abort": true,
00:10:05.830 "nvme_admin": false,
00:10:05.830 "nvme_io": false
00:10:05.830 },
00:10:05.830 "memory_domains": [
00:10:05.830 {
00:10:05.830 "dma_device_id": "system",
00:10:05.830 "dma_device_type": 1
00:10:05.830 },
00:10:05.830 {
00:10:05.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:05.830 "dma_device_type": 2
00:10:05.830 }
00:10:05.830 ],
00:10:05.830 "driver_specific": {}
00:10:05.830 }
00:10:05.830 ]
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:05.830 03:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:06.128 03:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:06.128 "name": "Existed_Raid",
00:10:06.128 "uuid": "64d74711-a3c1-4319-bcab-09c51ee03634",
00:10:06.128 "strip_size_kb": 64,
00:10:06.128 "state": "configuring",
00:10:06.128 "raid_level": "raid0",
00:10:06.128 "superblock": true,
00:10:06.128 "num_base_bdevs": 2,
00:10:06.128 "num_base_bdevs_discovered": 1,
00:10:06.128 "num_base_bdevs_operational": 2,
00:10:06.128 "base_bdevs_list": [
00:10:06.128 {
00:10:06.128 "name": "BaseBdev1",
00:10:06.128 "uuid": "0f316496-bb09-4363-9d1f-5f83950028f8",
00:10:06.128 "is_configured": true,
00:10:06.128 "data_offset": 2048,
00:10:06.128 "data_size": 63488
00:10:06.128 },
00:10:06.128 {
00:10:06.128 "name": "BaseBdev2",
00:10:06.128 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.128 "is_configured": false,
00:10:06.128 "data_offset": 0,
00:10:06.128 "data_size": 0
00:10:06.128 }
00:10:06.128 ]
00:10:06.128 }'
00:10:06.128 03:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:06.128 03:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.725 03:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:10:06.983 [2024-05-15 03:04:37.944355] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:06.983 [2024-05-15 03:04:37.944396] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xac2060 name Existed_Raid, state configuring
00:10:06.983 03:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:10:06.983 [2024-05-15 03:04:38.120864] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:06.983 [2024-05-15 03:04:38.122400] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:06.983 [2024-05-15 03:04:38.122431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 ))
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs ))
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:07.241 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:07.499 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:07.499 "name": "Existed_Raid",
00:10:07.499 "uuid": "a1812911-25d2-4bbc-9e20-492b8853cf2f",
00:10:07.499 "strip_size_kb": 64,
00:10:07.499 "state": "configuring",
00:10:07.499 "raid_level": "raid0",
00:10:07.499 "superblock": true,
00:10:07.499 "num_base_bdevs": 2,
00:10:07.499 "num_base_bdevs_discovered": 1,
00:10:07.499 "num_base_bdevs_operational": 2,
00:10:07.499 "base_bdevs_list": [
00:10:07.499 {
00:10:07.499 "name": "BaseBdev1",
00:10:07.499 "uuid": "0f316496-bb09-4363-9d1f-5f83950028f8",
00:10:07.499 "is_configured": true,
00:10:07.499 "data_offset": 2048,
00:10:07.499 "data_size": 63488
00:10:07.499 },
00:10:07.499 {
00:10:07.499 "name": "BaseBdev2",
00:10:07.499 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:07.499 "is_configured": false,
00:10:07.499 "data_offset": 0,
00:10:07.499 "data_size": 0
00:10:07.499 }
00:10:07.499 ]
00:10:07.499 }'
00:10:07.499 03:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:07.499 03:04:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.065 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:10:08.065 [2024-05-15 03:04:39.174952] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:08.065 [2024-05-15 03:04:39.175099] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0xac16b0
00:10:08.065 [2024-05-15 03:04:39.175112] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:10:08.065 [2024-05-15 03:04:39.175297] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xac1c70
00:10:08.065 [2024-05-15 03:04:39.175423] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xac16b0
00:10:08.065 [2024-05-15 03:04:39.175432] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xac16b0
00:10:08.065 [2024-05-15 03:04:39.175527] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:08.065 BaseBdev2
00:10:08.065 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2
00:10:08.065 03:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2
00:10:08.065 03:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout=
00:10:08.065 03:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i
00:10:08.065 03:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]]
00:10:08.065 03:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000
00:10:08.065 03:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:10:08.323 03:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:08.581 [
00:10:08.581 {
00:10:08.581 "name": "BaseBdev2",
00:10:08.581 "aliases": [
00:10:08.581 "25543a0d-cc0d-4a21-8f41-1894ed2cecea"
00:10:08.581 ],
00:10:08.581 "product_name": "Malloc disk",
00:10:08.581 "block_size": 512,
00:10:08.581 "num_blocks": 65536,
00:10:08.581 "uuid": "25543a0d-cc0d-4a21-8f41-1894ed2cecea",
00:10:08.581 "assigned_rate_limits": {
00:10:08.581 "rw_ios_per_sec": 0,
00:10:08.581 "rw_mbytes_per_sec": 0,
00:10:08.581 "r_mbytes_per_sec": 0,
00:10:08.581 "w_mbytes_per_sec": 0
00:10:08.581 },
00:10:08.581 "claimed": true,
00:10:08.581 "claim_type": "exclusive_write",
00:10:08.581 "zoned": false,
00:10:08.581 "supported_io_types": {
00:10:08.581 "read": true,
00:10:08.581 "write": true,
00:10:08.581 "unmap": true,
00:10:08.581 "write_zeroes": true,
00:10:08.581 "flush": true,
00:10:08.581 "reset": true,
00:10:08.581 "compare": false,
00:10:08.581 "compare_and_write": false,
00:10:08.581 "abort": true,
00:10:08.581 "nvme_admin": false,
00:10:08.581 "nvme_io": false
00:10:08.581 },
00:10:08.581 "memory_domains": [
00:10:08.581 {
00:10:08.581 "dma_device_id": "system",
00:10:08.581 "dma_device_type": 1
00:10:08.581 },
00:10:08.581 {
00:10:08.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:08.581 "dma_device_type": 2
00:10:08.581 }
00:10:08.581 ],
00:10:08.581 "driver_specific": {}
00:10:08.581 }
00:10:08.581 ]
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ ))
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs ))
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:08.581 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:08.839 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:08.839 "name": "Existed_Raid",
00:10:08.839 "uuid": "a1812911-25d2-4bbc-9e20-492b8853cf2f",
00:10:08.839 "strip_size_kb": 64,
00:10:08.839 "state": "online",
00:10:08.839 "raid_level": "raid0",
00:10:08.839 "superblock": true,
00:10:08.839 "num_base_bdevs": 2,
00:10:08.839 "num_base_bdevs_discovered": 2,
00:10:08.839 "num_base_bdevs_operational": 2,
00:10:08.839 "base_bdevs_list": [
00:10:08.839 {
00:10:08.839 "name": "BaseBdev1",
00:10:08.839 "uuid": "0f316496-bb09-4363-9d1f-5f83950028f8",
00:10:08.839 "is_configured": true,
00:10:08.839 "data_offset": 2048,
00:10:08.839 "data_size": 63488
00:10:08.839 },
00:10:08.839 {
00:10:08.839 "name": "BaseBdev2",
00:10:08.839 "uuid": "25543a0d-cc0d-4a21-8f41-1894ed2cecea",
00:10:08.839 "is_configured": true,
00:10:08.839 "data_offset": 2048,
00:10:08.839 "data_size": 63488
00:10:08.839 }
00:10:08.839 ]
00:10:08.839 }'
00:10:08.839 03:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:08.839 03:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
"a1812911-25d2-4bbc-9e20-492b8853cf2f", 00:10:09.664 "strip_size_kb": 64, 00:10:09.664 "state": "online", 00:10:09.664 "raid_level": "raid0", 00:10:09.664 "superblock": true, 00:10:09.664 "num_base_bdevs": 2, 00:10:09.664 "num_base_bdevs_discovered": 2, 00:10:09.664 "num_base_bdevs_operational": 2, 00:10:09.664 "base_bdevs_list": [ 00:10:09.664 { 00:10:09.664 "name": "BaseBdev1", 00:10:09.664 "uuid": "0f316496-bb09-4363-9d1f-5f83950028f8", 00:10:09.664 "is_configured": true, 00:10:09.664 "data_offset": 2048, 00:10:09.664 "data_size": 63488 00:10:09.664 }, 00:10:09.664 { 00:10:09.664 "name": "BaseBdev2", 00:10:09.664 "uuid": "25543a0d-cc0d-4a21-8f41-1894ed2cecea", 00:10:09.664 "is_configured": true, 00:10:09.664 "data_offset": 2048, 00:10:09.664 "data_size": 63488 00:10:09.664 } 00:10:09.664 ] 00:10:09.664 } 00:10:09.664 } 00:10:09.664 }' 00:10:09.664 03:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.664 03:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:10:09.664 BaseBdev2' 00:10:09.664 03:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:09.664 03:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:09.664 03:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:09.922 03:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:09.922 "name": "BaseBdev1", 00:10:09.922 "aliases": [ 00:10:09.922 "0f316496-bb09-4363-9d1f-5f83950028f8" 00:10:09.922 ], 00:10:09.922 "product_name": "Malloc disk", 00:10:09.922 "block_size": 512, 00:10:09.922 "num_blocks": 65536, 00:10:09.922 "uuid": "0f316496-bb09-4363-9d1f-5f83950028f8", 00:10:09.922 "assigned_rate_limits": { 00:10:09.922 "rw_ios_per_sec": 0, 00:10:09.922 "rw_mbytes_per_sec": 0, 00:10:09.922 "r_mbytes_per_sec": 0, 00:10:09.922 "w_mbytes_per_sec": 0 00:10:09.922 }, 00:10:09.922 "claimed": true, 00:10:09.922 "claim_type": "exclusive_write", 00:10:09.922 "zoned": false, 00:10:09.922 "supported_io_types": { 00:10:09.922 "read": true, 00:10:09.922 "write": true, 00:10:09.922 "unmap": true, 00:10:09.922 "write_zeroes": true, 00:10:09.922 "flush": true, 00:10:09.922 "reset": true, 00:10:09.922 "compare": false, 00:10:09.922 "compare_and_write": false, 00:10:09.922 "abort": true, 00:10:09.922 "nvme_admin": false, 00:10:09.922 "nvme_io": false 00:10:09.922 }, 00:10:09.922 "memory_domains": [ 00:10:09.922 { 00:10:09.922 "dma_device_id": "system", 00:10:09.922 "dma_device_type": 1 00:10:09.922 }, 00:10:09.922 { 00:10:09.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.922 "dma_device_type": 2 00:10:09.922 } 00:10:09.922 ], 00:10:09.922 "driver_specific": {} 00:10:09.922 }' 00:10:09.922 03:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:09.922 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:09.922 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:09.922 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:10.180 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:10.180 
03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:10.180 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:10.180 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:10.180 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:10.180 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:10.180 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:10.180 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:10.180 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:10.180 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:10.180 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:10.439 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:10.439 "name": "BaseBdev2", 00:10:10.439 "aliases": [ 00:10:10.439 "25543a0d-cc0d-4a21-8f41-1894ed2cecea" 00:10:10.439 ], 00:10:10.439 "product_name": "Malloc disk", 00:10:10.439 "block_size": 512, 00:10:10.439 "num_blocks": 65536, 00:10:10.439 "uuid": "25543a0d-cc0d-4a21-8f41-1894ed2cecea", 00:10:10.439 "assigned_rate_limits": { 00:10:10.439 "rw_ios_per_sec": 0, 00:10:10.439 "rw_mbytes_per_sec": 0, 00:10:10.439 "r_mbytes_per_sec": 0, 00:10:10.439 "w_mbytes_per_sec": 0 00:10:10.439 }, 00:10:10.439 "claimed": true, 00:10:10.439 "claim_type": "exclusive_write", 00:10:10.439 "zoned": false, 00:10:10.439 "supported_io_types": { 00:10:10.439 "read": true, 00:10:10.439 "write": true, 00:10:10.439 "unmap": true, 00:10:10.439 "write_zeroes": true, 00:10:10.439 "flush": true, 00:10:10.439 "reset": true, 00:10:10.439 "compare": false, 00:10:10.439 "compare_and_write": false, 00:10:10.439 "abort": true, 00:10:10.439 "nvme_admin": false, 00:10:10.439 "nvme_io": false 00:10:10.439 }, 00:10:10.439 "memory_domains": [ 00:10:10.439 { 00:10:10.439 "dma_device_id": "system", 00:10:10.439 "dma_device_type": 1 00:10:10.439 }, 00:10:10.439 { 00:10:10.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.439 "dma_device_type": 2 00:10:10.439 } 00:10:10.439 ], 00:10:10.439 "driver_specific": {} 00:10:10.439 }' 00:10:10.439 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:10.696 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:10.696 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:10.696 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:10.696 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:10.696 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:10.696 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:10.696 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:10.954 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:10.954 
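
verify_raid_bdev_properties walks every configured base bdev and checks that its geometry matches the raid volume: block_size must be 512 on both sides, while md_size, md_interleave and dif_type must all be null. Condensed into a sketch (same jq filters as the trace above):

    for name in $base_bdev_names; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [ "$(jq .block_size <<< "$info")" = 512 ]
        [ "$(jq .md_size <<< "$info")" = null ]
        [ "$(jq .md_interleave <<< "$info")" = null ]
        [ "$(jq .dif_type <<< "$info")" = null ]
    done

Note also that the Raid Volume dump differs from its members exactly where it should: "abort" and "claimed" are false on the volume but true on the malloc disks, and its memory_domains list simply concatenates those of the two base bdevs.
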
00:10:10.954 03:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:10:11.212 [2024-05-15 03:04:42.187120] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:11.212 [2024-05-15 03:04:42.187149] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:11.212 [2024-05-15 03:04:42.187190] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:11.212 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:11.213 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:11.213 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:11.470 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:11.470 "name": "Existed_Raid",
00:10:11.470 "uuid": "a1812911-25d2-4bbc-9e20-492b8853cf2f",
00:10:11.470 "strip_size_kb": 64,
00:10:11.470 "state": "offline",
00:10:11.470 "raid_level": "raid0",
00:10:11.470 "superblock": true,
00:10:11.470 "num_base_bdevs": 2,
00:10:11.470 "num_base_bdevs_discovered": 1,
00:10:11.470 "num_base_bdevs_operational": 1,
00:10:11.470 "base_bdevs_list": [
00:10:11.470 {
00:10:11.470 "name": null,
00:10:11.470 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:11.470 "is_configured": false,
00:10:11.470 "data_offset": 2048,
00:10:11.470 "data_size": 63488
00:10:11.470 },
00:10:11.470 {
00:10:11.470 "name": "BaseBdev2",
00:10:11.470 "uuid": "25543a0d-cc0d-4a21-8f41-1894ed2cecea",
00:10:11.470 "is_configured": true,
00:10:11.470 "data_offset": 2048,
00:10:11.470 "data_size": 63488
00:10:11.470 }
00:10:11.470 ]
00:10:11.470 }'
00:10:11.470 03:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:11.470 03:04:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.036 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:10:12.036 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:12.036 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:12.036 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]'
00:10:12.295 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid
00:10:12.295 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:12.295 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:10:12.553 [2024-05-15 03:04:43.564009] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:12.553 [2024-05-15 03:04:43.564062] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xac16b0 name Existed_Raid, state offline
00:10:12.553 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:12.553 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:12.553 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:12.553 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)'
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev=
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']'
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']'
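
raid0 carries no redundancy (has_redundancy returns 1 for it), so the expected reaction to losing a member is exactly what is traced above: deleting BaseBdev1 drops the whole array to "offline", with the vacated slot keeping its data_offset/data_size but reporting name null; deleting BaseBdev2 as well removes the raid bdev entirely, which is why the final name lookup comes back empty. The same teardown, sketched:

    $rpc bdev_malloc_delete BaseBdev1
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # offline
    $rpc bdev_malloc_delete BaseBdev2
    $rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'                      # empty: bdev is gone
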
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 4042815
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 4042815 ']'
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 4042815
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4042815
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4042815'
00:10:12.811 killing process with pid 4042815
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 4042815
00:10:12.811 [2024-05-15 03:04:43.822708] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:12.811 03:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 4042815
00:10:12.811 [2024-05-15 03:04:43.823585] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:13.069 03:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0
00:10:13.069
00:10:13.069 real 0m10.466s
00:10:13.069 user 0m18.975s
00:10:13.069 sys 0m1.581s
00:10:13.069 03:04:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:13.069 03:04:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.069 ************************************
00:10:13.069 END TEST raid_state_function_test_sb
00:10:13.069 ************************************
00:10:13.069 03:04:44 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid0 2
00:10:13.069 03:04:44 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:10:13.069 03:04:44 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:13.069 03:04:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:13.069 ************************************
00:10:13.069 START TEST raid_superblock_test
00:10:13.069 ************************************
00:10:13.069 03:04:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 2
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=4044800
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 4044800 /var/tmp/spdk-raid.sock
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 4044800 ']'
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:10:13.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable
00:10:13.070 03:04:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.070 [2024-05-15 03:04:44.187098] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:10:13.070 [2024-05-15 03:04:44.187152] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044800 ]
00:10:13.070 [2024-05-15 03:04:44.284666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:13.070 [2024-05-15 03:04:44.378563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:13.070 [2024-05-15 03:04:44.437227] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:13.070 [2024-05-15 03:04:44.437270] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:14.263 03:04:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:10:14.263 03:04:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0
00:10:14.263 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:10:14.263 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:14.263 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:10:14.263 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:10:14.263 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:10:14.263 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:14.263 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:14.263 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:14.263 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:10:14.521 malloc1
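
Each test case gets a fresh RPC host: the harness starts test/app/bdev_svc/bdev_svc with -r pointing at the private socket and -L bdev_raid, which is what enables all the *DEBUG* bdev_raid traces in this log, then waitforlisten polls the socket until it answers. A rough sketch of that startup, assuming the repo layout used here:

    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # waitforlisten (autotest_common.sh) retries an RPC against the socket,
    # up to max_retries=100 attempts, before giving up
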
00:10:14.521 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:14.521 [2024-05-15 03:04:45.623690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:14.521 [2024-05-15 03:04:45.623735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:14.521 [2024-05-15 03:04:45.623754] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dfea00
00:10:14.521 [2024-05-15 03:04:45.623764] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:14.521 [2024-05-15 03:04:45.625470] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:14.521 [2024-05-15 03:04:45.625497] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:14.521 pt1
00:10:14.521 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:14.521 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:14.521 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:10:14.521 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:10:14.521 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:10:14.521 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:14.521 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:14.521 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:14.521 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:10:14.780 malloc2
00:10:14.780 03:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:15.040 [2024-05-15 03:04:46.125695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:15.040 [2024-05-15 03:04:46.125735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:15.040 [2024-05-15 03:04:46.125755] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dff5f0
00:10:15.040 [2024-05-15 03:04:46.125764] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:15.040 [2024-05-15 03:04:46.127271] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:15.040 [2024-05-15 03:04:46.127297] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:15.040 pt2
00:10:15.040 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:15.040 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:15.040 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
00:10:15.299 [2024-05-15 03:04:46.374374] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:15.299 [2024-05-15 03:04:46.375723] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:15.299 [2024-05-15 03:04:46.375881] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fa4760
00:10:15.299 [2024-05-15 03:04:46.375894] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:10:15.299 [2024-05-15 03:04:46.376097] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e158f0
00:10:15.299 [2024-05-15 03:04:46.376250] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fa4760
00:10:15.299 [2024-05-15 03:04:46.376259] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fa4760
00:10:15.299 [2024-05-15 03:04:46.376360] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:15.299 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:10:15.299 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:10:15.299 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:10:15.299 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:10:15.299 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:10:15.299 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:10:15.299 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:15.299 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:15.299 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:15.299 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:15.299 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:15.299 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:15.558 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:15.558 "name": "raid_bdev1",
00:10:15.558 "uuid": "8e4f39e7-f661-419a-ba88-5d1ad343a7df",
00:10:15.558 "strip_size_kb": 64,
00:10:15.558 "state": "online",
00:10:15.558 "raid_level": "raid0",
00:10:15.559 "superblock": true,
00:10:15.559 "num_base_bdevs": 2,
00:10:15.559 "num_base_bdevs_discovered": 2,
00:10:15.559 "num_base_bdevs_operational": 2,
00:10:15.559 "base_bdevs_list": [
00:10:15.559 {
00:10:15.559 "name": "pt1",
00:10:15.559 "uuid": "c47b29de-e29c-57b1-9d6c-588ba2a0b50e",
00:10:15.559 "is_configured": true,
00:10:15.559 "data_offset": 2048,
00:10:15.559 "data_size": 63488
00:10:15.559 },
00:10:15.559 {
00:10:15.559 "name": "pt2",
00:10:15.559 "uuid": "4fd61d3c-ace6-5619-829e-807dd3277515",
00:10:15.559 "is_configured": true,
00:10:15.559 "data_offset": 2048,
00:10:15.559 "data_size": 63488
00:10:15.559 }
00:10:15.559 ]
00:10:15.559 }'
00:10:15.559 03:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:15.559 03:04:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.126 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:10:16.126 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1
00:10:16.126 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info
00:10:16.126 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info
00:10:16.126 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names
00:10:16.126 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name
00:10:16.126 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:10:16.126 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]'
00:10:16.385 [2024-05-15 03:04:47.477536] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:16.385 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{
00:10:16.385 "name": "raid_bdev1",
00:10:16.385 "aliases": [
00:10:16.385 "8e4f39e7-f661-419a-ba88-5d1ad343a7df"
00:10:16.385 ],
00:10:16.385 "product_name": "Raid Volume",
00:10:16.385 "block_size": 512,
00:10:16.385 "num_blocks": 126976,
00:10:16.385 "uuid": "8e4f39e7-f661-419a-ba88-5d1ad343a7df",
00:10:16.385 "assigned_rate_limits": {
00:10:16.385 "rw_ios_per_sec": 0,
00:10:16.385 "rw_mbytes_per_sec": 0,
00:10:16.385 "r_mbytes_per_sec": 0,
00:10:16.385 "w_mbytes_per_sec": 0
00:10:16.385 },
00:10:16.385 "claimed": false,
00:10:16.385 "zoned": false,
00:10:16.385 "supported_io_types": {
00:10:16.385 "read": true,
00:10:16.385 "write": true,
00:10:16.385 "unmap": true,
00:10:16.385 "write_zeroes": true,
00:10:16.385 "flush": true,
00:10:16.385 "reset": true,
00:10:16.385 "compare": false,
00:10:16.385 "compare_and_write": false,
00:10:16.385 "abort": false,
00:10:16.385 "nvme_admin": false,
00:10:16.385 "nvme_io": false
00:10:16.385 },
00:10:16.385 "memory_domains": [
00:10:16.385 {
00:10:16.385 "dma_device_id": "system",
00:10:16.385 "dma_device_type": 1
00:10:16.385 },
00:10:16.385 {
00:10:16.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:16.385 "dma_device_type": 2
00:10:16.385 },
00:10:16.385 {
00:10:16.385 "dma_device_id": "system",
00:10:16.385 "dma_device_type": 1
00:10:16.385 },
00:10:16.385 {
00:10:16.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:16.385 "dma_device_type": 2
00:10:16.385 }
00:10:16.385 ],
00:10:16.385 "driver_specific": {
00:10:16.385 "raid": {
00:10:16.385 "uuid": "8e4f39e7-f661-419a-ba88-5d1ad343a7df",
00:10:16.385 "strip_size_kb": 64,
00:10:16.385 "state": "online",
00:10:16.385 "raid_level": "raid0",
00:10:16.385 "superblock": true,
00:10:16.385 "num_base_bdevs": 2,
00:10:16.385 "num_base_bdevs_discovered": 2,
00:10:16.385 "num_base_bdevs_operational": 2,
00:10:16.385 "base_bdevs_list": [
00:10:16.385 {
00:10:16.385 "name": "pt1",
00:10:16.385 "uuid": "c47b29de-e29c-57b1-9d6c-588ba2a0b50e",
00:10:16.385 "is_configured": true,
00:10:16.385 "data_offset": 2048,
00:10:16.385 "data_size": 63488
00:10:16.385 },
00:10:16.385 {
00:10:16.385 "name": "pt2",
00:10:16.385 "uuid": "4fd61d3c-ace6-5619-829e-807dd3277515",
00:10:16.385 "is_configured": true,
00:10:16.385 "data_offset": 2048,
00:10:16.385 "data_size": 63488
00:10:16.385 }
00:10:16.385 ]
00:10:16.385 }
00:10:16.385 }
00:10:16.385 }'
00:10:16.385 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:16.644 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1
00:10:16.644 pt2'
00:10:16.644 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names
00:10:16.644 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1
00:10:16.644 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]'
00:10:16.903 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{
00:10:16.903 "name": "pt1",
00:10:16.903 "aliases": [
00:10:16.903 "c47b29de-e29c-57b1-9d6c-588ba2a0b50e"
00:10:16.903 ],
00:10:16.903 "product_name": "passthru",
00:10:16.903 "block_size": 512,
00:10:16.903 "num_blocks": 65536,
00:10:16.903 "uuid": "c47b29de-e29c-57b1-9d6c-588ba2a0b50e",
00:10:16.903 "assigned_rate_limits": {
00:10:16.903 "rw_ios_per_sec": 0,
00:10:16.903 "rw_mbytes_per_sec": 0,
00:10:16.903 "r_mbytes_per_sec": 0,
00:10:16.903 "w_mbytes_per_sec": 0
00:10:16.903 },
00:10:16.903 "claimed": true,
00:10:16.903 "claim_type": "exclusive_write",
00:10:16.903 "zoned": false,
00:10:16.903 "supported_io_types": {
00:10:16.903 "read": true,
00:10:16.903 "write": true,
00:10:16.903 "unmap": true,
00:10:16.903 "write_zeroes": true,
00:10:16.903 "flush": true,
00:10:16.903 "reset": true,
00:10:16.903 "compare": false,
00:10:16.903 "compare_and_write": false,
00:10:16.903 "abort": true,
00:10:16.903 "nvme_admin": false,
00:10:16.903 "nvme_io": false
00:10:16.903 },
00:10:16.903 "memory_domains": [
00:10:16.903 {
00:10:16.903 "dma_device_id": "system",
00:10:16.903 "dma_device_type": 1
00:10:16.903 },
00:10:16.903 {
00:10:16.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:16.903 "dma_device_type": 2
00:10:16.903 }
00:10:16.903 ],
00:10:16.903 "driver_specific": {
00:10:16.903 "passthru": {
00:10:16.903 "name": "pt1",
00:10:16.903 "base_bdev_name": "malloc1"
00:10:16.903 }
00:10:16.903 }
00:10:16.903 }'
00:10:16.903 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size
00:10:16.903 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size
00:10:16.903 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]]
00:10:16.903 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size
00:10:16.903 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size
00:10:16.903 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:10:16.903 03:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave
00:10:17.162 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave
00:10:17.162 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:10:17.162 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type
00:10:17.162 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type
00:10:17.162 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]]
00:10:17.162 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names
00:10:17.162 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2
00:10:17.162 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]'
00:10:17.421 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{
00:10:17.421 "name": "pt2",
00:10:17.421 "aliases": [
00:10:17.421 "4fd61d3c-ace6-5619-829e-807dd3277515"
00:10:17.421 ],
00:10:17.421 "product_name": "passthru",
00:10:17.421 "block_size": 512,
00:10:17.421 "num_blocks": 65536,
00:10:17.421 "uuid": "4fd61d3c-ace6-5619-829e-807dd3277515",
00:10:17.421 "assigned_rate_limits": {
00:10:17.421 "rw_ios_per_sec": 0,
00:10:17.421 "rw_mbytes_per_sec": 0,
00:10:17.421 "r_mbytes_per_sec": 0,
00:10:17.421 "w_mbytes_per_sec": 0
00:10:17.421 },
00:10:17.421 "claimed": true,
00:10:17.421 "claim_type": "exclusive_write",
00:10:17.421 "zoned": false,
00:10:17.421 "supported_io_types": {
00:10:17.421 "read": true,
00:10:17.421 "write": true,
00:10:17.421 "unmap": true,
00:10:17.421 "write_zeroes": true,
00:10:17.421 "flush": true,
00:10:17.421 "reset": true,
00:10:17.421 "compare": false,
00:10:17.421 "compare_and_write": false,
00:10:17.421 "abort": true,
00:10:17.421 "nvme_admin": false,
00:10:17.421 "nvme_io": false
00:10:17.421 },
00:10:17.421 "memory_domains": [
00:10:17.421 {
00:10:17.421 "dma_device_id": "system",
00:10:17.421 "dma_device_type": 1
00:10:17.421 },
00:10:17.421 {
00:10:17.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:17.421 "dma_device_type": 2
00:10:17.421 }
00:10:17.421 ],
00:10:17.421 "driver_specific": {
00:10:17.421 "passthru": {
00:10:17.421 "name": "pt2",
00:10:17.421 "base_bdev_name": "malloc2"
00:10:17.421 }
00:10:17.421 }
00:10:17.421 }'
00:10:17.421 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size
00:10:17.421 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size
00:10:17.421 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]]
00:10:17.421 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size
00:10:17.680 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size
00:10:17.680 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:10:17.680 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave
00:10:17.680 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave
00:10:17.680 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:10:17.680 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type
00:10:17.680 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type
00:10:17.680 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]]
00:10:17.680 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:10:17.680 03:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:10:17.939 [2024-05-15 03:04:49.021667] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:17.939 03:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8e4f39e7-f661-419a-ba88-5d1ad343a7df
00:10:17.939 03:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8e4f39e7-f661-419a-ba88-5d1ad343a7df ']'
00:10:17.939 03:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:10:18.197 [2024-05-15 03:04:49.274118] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:18.197 [2024-05-15 03:04:49.274136] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:18.197 [2024-05-15 03:04:49.274185] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:18.197 [2024-05-15 03:04:49.274227] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:18.197 [2024-05-15 03:04:49.274235] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fa4760 name raid_bdev1, state offline
00:10:18.197 03:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:18.197 03:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:10:18.455 03:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:10:18.455 03:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:10:18.455 03:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:18.455 03:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:10:18.712 03:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:18.712 03:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:10:18.971 03:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:10:18.971 03:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:10:19.230 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:10:19.489 [2024-05-15 03:04:50.545463] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:19.489 [2024-05-15 03:04:50.546886] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:19.489 [2024-05-15 03:04:50.546938] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:10:19.489 [2024-05-15 03:04:50.546976] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:10:19.489 [2024-05-15 03:04:50.546990] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:19.489 [2024-05-15 03:04:50.546998] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fa25c0 name raid_bdev1, state configuring
00:10:19.489 request:
00:10:19.489 {
00:10:19.489 "name": "raid_bdev1",
00:10:19.489 "raid_level": "raid0",
00:10:19.489 "base_bdevs": [
00:10:19.489 "malloc1",
00:10:19.489 "malloc2"
00:10:19.489 ],
00:10:19.489 "superblock": false,
00:10:19.489 "strip_size_kb": 64,
00:10:19.489 "method": "bdev_raid_create",
00:10:19.489 "req_id": 1
00:10:19.489 }
00:10:19.489 Got JSON-RPC error response
00:10:19.489 response:
00:10:19.489 {
00:10:19.489 "code": -17,
00:10:19.489 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:19.489 }
00:10:19.489 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1
00:10:19.489 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:10:19.489 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:10:19.489 03:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 ))
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.036 [2024-05-15 03:04:51.058805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.036 [2024-05-15 03:04:51.058827] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fa6200 00:10:20.036 [2024-05-15 03:04:51.058837] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.036 [2024-05-15 03:04:51.060528] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.036 [2024-05-15 03:04:51.060555] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:20.036 [2024-05-15 03:04:51.060616] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:10:20.036 [2024-05-15 03:04:51.060639] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.036 pt1 00:10:20.036 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:20.036 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:20.036 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:20.036 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:20.036 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:20.036 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:20.036 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:20.036 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:20.036 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:20.036 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:20.036 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.036 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.295 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:20.295 "name": "raid_bdev1", 00:10:20.295 "uuid": "8e4f39e7-f661-419a-ba88-5d1ad343a7df", 00:10:20.295 "strip_size_kb": 64, 00:10:20.295 "state": "configuring", 00:10:20.295 "raid_level": "raid0", 00:10:20.295 "superblock": true, 00:10:20.295 "num_base_bdevs": 2, 00:10:20.295 "num_base_bdevs_discovered": 1, 00:10:20.295 "num_base_bdevs_operational": 2, 00:10:20.295 "base_bdevs_list": [ 00:10:20.295 { 00:10:20.295 "name": "pt1", 00:10:20.295 "uuid": "c47b29de-e29c-57b1-9d6c-588ba2a0b50e", 00:10:20.295 "is_configured": true, 00:10:20.295 "data_offset": 2048, 00:10:20.295 "data_size": 63488 00:10:20.295 }, 00:10:20.295 { 00:10:20.295 "name": null, 00:10:20.295 "uuid": "4fd61d3c-ace6-5619-829e-807dd3277515", 00:10:20.295 "is_configured": false, 00:10:20.295 "data_offset": 2048, 00:10:20.295 "data_size": 63488 00:10:20.295 } 00:10:20.295 ] 00:10:20.295 }' 00:10:20.295 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:20.295 03:04:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
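A note on the failure exercised a few entries above: the NOT wrapper from common/autotest_common.sh (visible in the trace around @648-@675) inverts the exit status of the wrapped command, so the test passes precisely because the target rejects bdev_raid_create with JSON-RPC error -17 ("File exists") — apparently because malloc1 and malloc2 still hold the raid superblock that was written through the pt1/pt2 passthru layer earlier in the test. A minimal sketch of the same negative check, assuming the target from this run is still listening on /var/tmp/spdk-raid.sock and writing rpc.py for the full scripts/rpc.py path shown in the trace:

    # expected to fail: the base bdevs hold a stale raid superblock
    if rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
           -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo "bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi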
00:10:20.861 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:20.861 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:20.861 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.861 03:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.120 [2024-05-15 03:04:52.177762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.120 [2024-05-15 03:04:52.177807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.120 [2024-05-15 03:04:52.177825] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fa2320 00:10:21.120 [2024-05-15 03:04:52.177834] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.120 [2024-05-15 03:04:52.178190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.120 [2024-05-15 03:04:52.178206] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:21.120 [2024-05-15 03:04:52.178263] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:21.120 [2024-05-15 03:04:52.178280] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.120 [2024-05-15 03:04:52.178375] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fa6cc0 00:10:21.120 [2024-05-15 03:04:52.178384] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:21.120 [2024-05-15 03:04:52.178555] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e158f0 00:10:21.120 [2024-05-15 03:04:52.178682] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fa6cc0 00:10:21.120 [2024-05-15 03:04:52.178690] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fa6cc0 00:10:21.120 [2024-05-15 03:04:52.178787] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.120 pt2 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:21.120 03:04:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.120 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.379 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:21.379 "name": "raid_bdev1", 00:10:21.379 "uuid": "8e4f39e7-f661-419a-ba88-5d1ad343a7df", 00:10:21.379 "strip_size_kb": 64, 00:10:21.379 "state": "online", 00:10:21.379 "raid_level": "raid0", 00:10:21.379 "superblock": true, 00:10:21.379 "num_base_bdevs": 2, 00:10:21.379 "num_base_bdevs_discovered": 2, 00:10:21.379 "num_base_bdevs_operational": 2, 00:10:21.379 "base_bdevs_list": [ 00:10:21.379 { 00:10:21.379 "name": "pt1", 00:10:21.379 "uuid": "c47b29de-e29c-57b1-9d6c-588ba2a0b50e", 00:10:21.379 "is_configured": true, 00:10:21.379 "data_offset": 2048, 00:10:21.379 "data_size": 63488 00:10:21.379 }, 00:10:21.379 { 00:10:21.379 "name": "pt2", 00:10:21.379 "uuid": "4fd61d3c-ace6-5619-829e-807dd3277515", 00:10:21.379 "is_configured": true, 00:10:21.379 "data_offset": 2048, 00:10:21.379 "data_size": 63488 00:10:21.379 } 00:10:21.379 ] 00:10:21.379 }' 00:10:21.379 03:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:21.379 03:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.945 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:21.945 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:10:21.945 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:10:21.945 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:10:21.945 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:10:21.945 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:10:21.945 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:21.945 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:10:22.203 [2024-05-15 03:04:53.188685] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.203 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:10:22.203 "name": "raid_bdev1", 00:10:22.203 "aliases": [ 00:10:22.203 "8e4f39e7-f661-419a-ba88-5d1ad343a7df" 00:10:22.203 ], 00:10:22.203 "product_name": "Raid Volume", 00:10:22.203 "block_size": 512, 00:10:22.203 "num_blocks": 126976, 00:10:22.203 "uuid": "8e4f39e7-f661-419a-ba88-5d1ad343a7df", 00:10:22.203 "assigned_rate_limits": { 00:10:22.203 "rw_ios_per_sec": 0, 00:10:22.203 "rw_mbytes_per_sec": 0, 00:10:22.203 "r_mbytes_per_sec": 0, 00:10:22.203 "w_mbytes_per_sec": 0 00:10:22.203 }, 00:10:22.203 "claimed": false, 00:10:22.203 "zoned": false, 00:10:22.203 "supported_io_types": { 00:10:22.203 "read": true, 00:10:22.203 "write": true, 00:10:22.203 "unmap": true, 00:10:22.203 "write_zeroes": true, 00:10:22.203 "flush": true, 00:10:22.203 "reset": true, 00:10:22.203 "compare": false, 00:10:22.203 "compare_and_write": 
false, 00:10:22.203 "abort": false, 00:10:22.203 "nvme_admin": false, 00:10:22.203 "nvme_io": false 00:10:22.203 }, 00:10:22.203 "memory_domains": [ 00:10:22.203 { 00:10:22.203 "dma_device_id": "system", 00:10:22.203 "dma_device_type": 1 00:10:22.203 }, 00:10:22.203 { 00:10:22.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.203 "dma_device_type": 2 00:10:22.203 }, 00:10:22.203 { 00:10:22.203 "dma_device_id": "system", 00:10:22.203 "dma_device_type": 1 00:10:22.203 }, 00:10:22.203 { 00:10:22.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.203 "dma_device_type": 2 00:10:22.203 } 00:10:22.203 ], 00:10:22.203 "driver_specific": { 00:10:22.203 "raid": { 00:10:22.203 "uuid": "8e4f39e7-f661-419a-ba88-5d1ad343a7df", 00:10:22.203 "strip_size_kb": 64, 00:10:22.203 "state": "online", 00:10:22.203 "raid_level": "raid0", 00:10:22.203 "superblock": true, 00:10:22.203 "num_base_bdevs": 2, 00:10:22.203 "num_base_bdevs_discovered": 2, 00:10:22.203 "num_base_bdevs_operational": 2, 00:10:22.203 "base_bdevs_list": [ 00:10:22.203 { 00:10:22.203 "name": "pt1", 00:10:22.203 "uuid": "c47b29de-e29c-57b1-9d6c-588ba2a0b50e", 00:10:22.203 "is_configured": true, 00:10:22.203 "data_offset": 2048, 00:10:22.203 "data_size": 63488 00:10:22.203 }, 00:10:22.203 { 00:10:22.203 "name": "pt2", 00:10:22.203 "uuid": "4fd61d3c-ace6-5619-829e-807dd3277515", 00:10:22.203 "is_configured": true, 00:10:22.203 "data_offset": 2048, 00:10:22.203 "data_size": 63488 00:10:22.203 } 00:10:22.203 ] 00:10:22.203 } 00:10:22.203 } 00:10:22.203 }' 00:10:22.203 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.203 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:10:22.203 pt2' 00:10:22.203 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:22.203 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:22.203 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:22.468 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:22.469 "name": "pt1", 00:10:22.469 "aliases": [ 00:10:22.469 "c47b29de-e29c-57b1-9d6c-588ba2a0b50e" 00:10:22.469 ], 00:10:22.469 "product_name": "passthru", 00:10:22.469 "block_size": 512, 00:10:22.469 "num_blocks": 65536, 00:10:22.469 "uuid": "c47b29de-e29c-57b1-9d6c-588ba2a0b50e", 00:10:22.469 "assigned_rate_limits": { 00:10:22.469 "rw_ios_per_sec": 0, 00:10:22.469 "rw_mbytes_per_sec": 0, 00:10:22.469 "r_mbytes_per_sec": 0, 00:10:22.469 "w_mbytes_per_sec": 0 00:10:22.469 }, 00:10:22.469 "claimed": true, 00:10:22.469 "claim_type": "exclusive_write", 00:10:22.469 "zoned": false, 00:10:22.469 "supported_io_types": { 00:10:22.469 "read": true, 00:10:22.469 "write": true, 00:10:22.469 "unmap": true, 00:10:22.469 "write_zeroes": true, 00:10:22.469 "flush": true, 00:10:22.469 "reset": true, 00:10:22.469 "compare": false, 00:10:22.469 "compare_and_write": false, 00:10:22.469 "abort": true, 00:10:22.469 "nvme_admin": false, 00:10:22.469 "nvme_io": false 00:10:22.469 }, 00:10:22.469 "memory_domains": [ 00:10:22.469 { 00:10:22.469 "dma_device_id": "system", 00:10:22.469 "dma_device_type": 1 00:10:22.469 }, 00:10:22.469 { 00:10:22.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.469 "dma_device_type": 2 
00:10:22.469 } 00:10:22.469 ], 00:10:22.469 "driver_specific": { 00:10:22.469 "passthru": { 00:10:22.469 "name": "pt1", 00:10:22.469 "base_bdev_name": "malloc1" 00:10:22.469 } 00:10:22.469 } 00:10:22.469 }' 00:10:22.469 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:22.469 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:22.469 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:22.469 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:22.731 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:22.731 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:22.731 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:22.731 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:22.731 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:22.731 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:22.731 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:22.731 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:22.731 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:22.731 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:22.731 03:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:22.989 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:22.989 "name": "pt2", 00:10:22.989 "aliases": [ 00:10:22.989 "4fd61d3c-ace6-5619-829e-807dd3277515" 00:10:22.989 ], 00:10:22.989 "product_name": "passthru", 00:10:22.989 "block_size": 512, 00:10:22.989 "num_blocks": 65536, 00:10:22.989 "uuid": "4fd61d3c-ace6-5619-829e-807dd3277515", 00:10:22.989 "assigned_rate_limits": { 00:10:22.989 "rw_ios_per_sec": 0, 00:10:22.989 "rw_mbytes_per_sec": 0, 00:10:22.989 "r_mbytes_per_sec": 0, 00:10:22.989 "w_mbytes_per_sec": 0 00:10:22.989 }, 00:10:22.989 "claimed": true, 00:10:22.989 "claim_type": "exclusive_write", 00:10:22.989 "zoned": false, 00:10:22.989 "supported_io_types": { 00:10:22.989 "read": true, 00:10:22.989 "write": true, 00:10:22.989 "unmap": true, 00:10:22.989 "write_zeroes": true, 00:10:22.989 "flush": true, 00:10:22.989 "reset": true, 00:10:22.989 "compare": false, 00:10:22.989 "compare_and_write": false, 00:10:22.989 "abort": true, 00:10:22.989 "nvme_admin": false, 00:10:22.989 "nvme_io": false 00:10:22.989 }, 00:10:22.989 "memory_domains": [ 00:10:22.989 { 00:10:22.989 "dma_device_id": "system", 00:10:22.989 "dma_device_type": 1 00:10:22.989 }, 00:10:22.989 { 00:10:22.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.989 "dma_device_type": 2 00:10:22.989 } 00:10:22.989 ], 00:10:22.989 "driver_specific": { 00:10:22.989 "passthru": { 00:10:22.989 "name": "pt2", 00:10:22.989 "base_bdev_name": "malloc2" 00:10:22.989 } 00:10:22.989 } 00:10:22.989 }' 00:10:22.989 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:23.248 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:23.248 03:04:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:23.248 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:23.248 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:23.248 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:23.248 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:23.248 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:23.248 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:23.248 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:23.506 [2024-05-15 03:04:54.588439] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8e4f39e7-f661-419a-ba88-5d1ad343a7df '!=' 8e4f39e7-f661-419a-ba88-5d1ad343a7df ']' 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 4044800 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 4044800 ']' 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 4044800 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4044800 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4044800' 00:10:23.506 killing process with pid 4044800 00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 4044800 00:10:23.506 [2024-05-15 03:04:54.660533] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.506 [2024-05-15 03:04:54.660590] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.506 [2024-05-15 03:04:54.660629] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.506 [2024-05-15 03:04:54.660642] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fa6cc0 name raid_bdev1, state offline 
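The has_redundancy call near the end of the test above is what decides that a raid0 volume gets no degraded-mode coverage: the trace only shows the case dispatch (bdev_raid.sh@214) and the return 1 (@216), so the body below is a hedged reconstruction consistent with those two lines — the exact set of levels treated as redundant is an assumption, not confirmed by this log:

    has_redundancy() {
        case $1 in
            raid1) return 0 ;;  # mirrored levels tolerate a lost base bdev
            *) return 1 ;;      # raid0/concat have no redundancy
        esac
    }

Because it returns 1 here, execution jumps from bdev_raid.sh@491 straight to the killprocess at @568, skipping the redundancy-only scenarios in between.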
00:10:23.506 03:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 4044800 00:10:23.765 [2024-05-15 03:04:54.676746] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.765 03:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:10:23.765 00:10:23.765 real 0m10.769s 00:10:23.765 user 0m19.698s 00:10:23.765 sys 0m1.544s 00:10:23.765 03:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.765 03:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.765 ************************************ 00:10:23.765 END TEST raid_superblock_test 00:10:23.765 ************************************ 00:10:24.024 03:04:54 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:10:24.024 03:04:54 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:24.024 03:04:54 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:24.024 03:04:54 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.024 03:04:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.024 ************************************ 00:10:24.024 START TEST raid_state_function_test 00:10:24.024 ************************************ 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 false 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat 
'!=' raid1 ']' 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=4046715 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4046715' 00:10:24.024 Process raid pid: 4046715 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 4046715 /var/tmp/spdk-raid.sock 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 4046715 ']' 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:24.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:24.024 03:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.024 [2024-05-15 03:04:55.037510] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
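verify_raid_bdev_state, which this second test invokes after every RPC that should change the raid bdev (bdev_raid.sh@252, @260, @267, @271, @282 in the trace), boils down to one query plus jq filtering; every field it checks appears in the JSON dumps in this log. A rough shell equivalent, assuming the same socket and using the field names exactly as dumped above:

    tmp=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')
    # each assertion mirrors one expected_state/raid_level/strip_size argument
    [ "$(jq -r '.state' <<< "$tmp")" = configuring ]
    [ "$(jq -r '.raid_level' <<< "$tmp")" = concat ]
    [ "$(jq -r '.strip_size_kb' <<< "$tmp")" = 64 ]
    [ "$(jq -r '.num_base_bdevs_operational' <<< "$tmp")" = 2 ]

An empty result (raid bdev not found) or any mismatched field fails the test at that step.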
00:10:24.024 [2024-05-15 03:04:55.037563] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.024 [2024-05-15 03:04:55.138231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.282 [2024-05-15 03:04:55.233161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.282 [2024-05-15 03:04:55.299962] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.282 [2024-05-15 03:04:55.299996] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.849 03:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:24.849 03:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:10:24.849 03:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:25.107 [2024-05-15 03:04:56.216632] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.107 [2024-05-15 03:04:56.216673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.107 [2024-05-15 03:04:56.216682] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.107 [2024-05-15 03:04:56.216692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.107 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:25.107 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:25.107 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:25.107 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:25.107 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:25.107 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:25.107 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:25.107 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:25.107 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:25.107 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:25.107 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.107 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.366 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:25.366 "name": "Existed_Raid", 00:10:25.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.366 "strip_size_kb": 64, 00:10:25.366 "state": "configuring", 00:10:25.366 "raid_level": "concat", 00:10:25.366 "superblock": false, 00:10:25.366 "num_base_bdevs": 
2, 00:10:25.366 "num_base_bdevs_discovered": 0, 00:10:25.366 "num_base_bdevs_operational": 2, 00:10:25.366 "base_bdevs_list": [ 00:10:25.366 { 00:10:25.366 "name": "BaseBdev1", 00:10:25.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.366 "is_configured": false, 00:10:25.366 "data_offset": 0, 00:10:25.366 "data_size": 0 00:10:25.366 }, 00:10:25.366 { 00:10:25.366 "name": "BaseBdev2", 00:10:25.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.366 "is_configured": false, 00:10:25.366 "data_offset": 0, 00:10:25.366 "data_size": 0 00:10:25.366 } 00:10:25.366 ] 00:10:25.366 }' 00:10:25.366 03:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:25.366 03:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.300 03:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:26.300 [2024-05-15 03:04:57.339496] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.300 [2024-05-15 03:04:57.339524] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1819dc0 name Existed_Raid, state configuring 00:10:26.300 03:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:26.558 [2024-05-15 03:04:57.596193] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.558 [2024-05-15 03:04:57.596218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.558 [2024-05-15 03:04:57.596226] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.558 [2024-05-15 03:04:57.596235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.558 03:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.816 [2024-05-15 03:04:57.858334] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.816 BaseBdev1 00:10:26.816 03:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:10:26.816 03:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:26.816 03:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:26.816 03:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:26.816 03:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:26.816 03:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:26.816 03:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:27.074 03:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:27.331 [ 00:10:27.331 { 
00:10:27.331 "name": "BaseBdev1", 00:10:27.331 "aliases": [ 00:10:27.331 "caff9b40-f4b0-4828-99dd-fd12f74294af" 00:10:27.331 ], 00:10:27.331 "product_name": "Malloc disk", 00:10:27.331 "block_size": 512, 00:10:27.331 "num_blocks": 65536, 00:10:27.331 "uuid": "caff9b40-f4b0-4828-99dd-fd12f74294af", 00:10:27.331 "assigned_rate_limits": { 00:10:27.331 "rw_ios_per_sec": 0, 00:10:27.331 "rw_mbytes_per_sec": 0, 00:10:27.332 "r_mbytes_per_sec": 0, 00:10:27.332 "w_mbytes_per_sec": 0 00:10:27.332 }, 00:10:27.332 "claimed": true, 00:10:27.332 "claim_type": "exclusive_write", 00:10:27.332 "zoned": false, 00:10:27.332 "supported_io_types": { 00:10:27.332 "read": true, 00:10:27.332 "write": true, 00:10:27.332 "unmap": true, 00:10:27.332 "write_zeroes": true, 00:10:27.332 "flush": true, 00:10:27.332 "reset": true, 00:10:27.332 "compare": false, 00:10:27.332 "compare_and_write": false, 00:10:27.332 "abort": true, 00:10:27.332 "nvme_admin": false, 00:10:27.332 "nvme_io": false 00:10:27.332 }, 00:10:27.332 "memory_domains": [ 00:10:27.332 { 00:10:27.332 "dma_device_id": "system", 00:10:27.332 "dma_device_type": 1 00:10:27.332 }, 00:10:27.332 { 00:10:27.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.332 "dma_device_type": 2 00:10:27.332 } 00:10:27.332 ], 00:10:27.332 "driver_specific": {} 00:10:27.332 } 00:10:27.332 ] 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.332 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.590 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:27.590 "name": "Existed_Raid", 00:10:27.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.590 "strip_size_kb": 64, 00:10:27.590 "state": "configuring", 00:10:27.590 "raid_level": "concat", 00:10:27.590 "superblock": false, 00:10:27.590 "num_base_bdevs": 2, 00:10:27.590 "num_base_bdevs_discovered": 1, 00:10:27.590 "num_base_bdevs_operational": 2, 00:10:27.590 "base_bdevs_list": [ 00:10:27.590 { 00:10:27.590 "name": "BaseBdev1", 00:10:27.590 "uuid": "caff9b40-f4b0-4828-99dd-fd12f74294af", 00:10:27.590 
"is_configured": true, 00:10:27.590 "data_offset": 0, 00:10:27.590 "data_size": 65536 00:10:27.590 }, 00:10:27.590 { 00:10:27.590 "name": "BaseBdev2", 00:10:27.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.590 "is_configured": false, 00:10:27.590 "data_offset": 0, 00:10:27.590 "data_size": 0 00:10:27.590 } 00:10:27.590 ] 00:10:27.590 }' 00:10:27.590 03:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:27.590 03:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.156 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:28.414 [2024-05-15 03:04:59.382419] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.414 [2024-05-15 03:04:59.382460] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x181a060 name Existed_Raid, state configuring 00:10:28.414 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:28.672 [2024-05-15 03:04:59.639122] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.672 [2024-05-15 03:04:59.640645] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.672 [2024-05-15 03:04:59.640674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.672 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.931 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:28.931 "name": "Existed_Raid", 00:10:28.931 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:28.931 "strip_size_kb": 64, 00:10:28.931 "state": "configuring", 00:10:28.931 "raid_level": "concat", 00:10:28.931 "superblock": false, 00:10:28.931 "num_base_bdevs": 2, 00:10:28.931 "num_base_bdevs_discovered": 1, 00:10:28.931 "num_base_bdevs_operational": 2, 00:10:28.931 "base_bdevs_list": [ 00:10:28.931 { 00:10:28.931 "name": "BaseBdev1", 00:10:28.931 "uuid": "caff9b40-f4b0-4828-99dd-fd12f74294af", 00:10:28.931 "is_configured": true, 00:10:28.931 "data_offset": 0, 00:10:28.931 "data_size": 65536 00:10:28.931 }, 00:10:28.931 { 00:10:28.931 "name": "BaseBdev2", 00:10:28.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.931 "is_configured": false, 00:10:28.931 "data_offset": 0, 00:10:28.931 "data_size": 0 00:10:28.931 } 00:10:28.931 ] 00:10:28.931 }' 00:10:28.931 03:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:28.931 03:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.497 03:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.755 [2024-05-15 03:05:00.757331] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.755 [2024-05-15 03:05:00.757364] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x18196b0 00:10:29.755 [2024-05-15 03:05:00.757371] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:29.755 [2024-05-15 03:05:00.757561] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1819c70 00:10:29.755 [2024-05-15 03:05:00.757682] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18196b0 00:10:29.755 [2024-05-15 03:05:00.757690] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x18196b0 00:10:29.756 [2024-05-15 03:05:00.757864] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.756 BaseBdev2 00:10:29.756 03:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:10:29.756 03:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:29.756 03:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:29.756 03:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:29.756 03:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:29.756 03:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:29.756 03:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:30.013 03:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:30.272 [ 00:10:30.272 { 00:10:30.272 "name": "BaseBdev2", 00:10:30.272 "aliases": [ 00:10:30.272 "675474ef-6dc2-4353-a6b1-5b956afcc5c1" 00:10:30.272 ], 00:10:30.272 "product_name": "Malloc disk", 00:10:30.272 "block_size": 512, 00:10:30.272 "num_blocks": 65536, 00:10:30.272 "uuid": 
"675474ef-6dc2-4353-a6b1-5b956afcc5c1", 00:10:30.272 "assigned_rate_limits": { 00:10:30.272 "rw_ios_per_sec": 0, 00:10:30.272 "rw_mbytes_per_sec": 0, 00:10:30.272 "r_mbytes_per_sec": 0, 00:10:30.272 "w_mbytes_per_sec": 0 00:10:30.272 }, 00:10:30.272 "claimed": true, 00:10:30.272 "claim_type": "exclusive_write", 00:10:30.272 "zoned": false, 00:10:30.272 "supported_io_types": { 00:10:30.272 "read": true, 00:10:30.272 "write": true, 00:10:30.272 "unmap": true, 00:10:30.272 "write_zeroes": true, 00:10:30.272 "flush": true, 00:10:30.272 "reset": true, 00:10:30.272 "compare": false, 00:10:30.272 "compare_and_write": false, 00:10:30.272 "abort": true, 00:10:30.272 "nvme_admin": false, 00:10:30.272 "nvme_io": false 00:10:30.272 }, 00:10:30.272 "memory_domains": [ 00:10:30.272 { 00:10:30.272 "dma_device_id": "system", 00:10:30.272 "dma_device_type": 1 00:10:30.272 }, 00:10:30.272 { 00:10:30.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.272 "dma_device_type": 2 00:10:30.272 } 00:10:30.272 ], 00:10:30.272 "driver_specific": {} 00:10:30.272 } 00:10:30.272 ] 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.272 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.530 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:30.530 "name": "Existed_Raid", 00:10:30.530 "uuid": "d0ef380d-4333-4023-a4c8-cc0fcdcf4ebc", 00:10:30.530 "strip_size_kb": 64, 00:10:30.530 "state": "online", 00:10:30.530 "raid_level": "concat", 00:10:30.530 "superblock": false, 00:10:30.530 "num_base_bdevs": 2, 00:10:30.530 "num_base_bdevs_discovered": 2, 00:10:30.530 "num_base_bdevs_operational": 2, 00:10:30.530 "base_bdevs_list": [ 00:10:30.530 { 00:10:30.530 "name": "BaseBdev1", 00:10:30.530 "uuid": "caff9b40-f4b0-4828-99dd-fd12f74294af", 00:10:30.530 "is_configured": true, 00:10:30.530 "data_offset": 0, 
00:10:30.530 "data_size": 65536 00:10:30.530 }, 00:10:30.530 { 00:10:30.530 "name": "BaseBdev2", 00:10:30.530 "uuid": "675474ef-6dc2-4353-a6b1-5b956afcc5c1", 00:10:30.530 "is_configured": true, 00:10:30.530 "data_offset": 0, 00:10:30.530 "data_size": 65536 00:10:30.530 } 00:10:30.530 ] 00:10:30.530 }' 00:10:30.530 03:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:30.530 03:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.094 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:10:31.094 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:10:31.094 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:10:31.094 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:10:31.094 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:10:31.094 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:10:31.094 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:31.094 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:10:31.352 [2024-05-15 03:05:02.389968] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.352 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:10:31.352 "name": "Existed_Raid", 00:10:31.352 "aliases": [ 00:10:31.352 "d0ef380d-4333-4023-a4c8-cc0fcdcf4ebc" 00:10:31.352 ], 00:10:31.352 "product_name": "Raid Volume", 00:10:31.352 "block_size": 512, 00:10:31.352 "num_blocks": 131072, 00:10:31.352 "uuid": "d0ef380d-4333-4023-a4c8-cc0fcdcf4ebc", 00:10:31.352 "assigned_rate_limits": { 00:10:31.352 "rw_ios_per_sec": 0, 00:10:31.352 "rw_mbytes_per_sec": 0, 00:10:31.352 "r_mbytes_per_sec": 0, 00:10:31.352 "w_mbytes_per_sec": 0 00:10:31.352 }, 00:10:31.352 "claimed": false, 00:10:31.352 "zoned": false, 00:10:31.352 "supported_io_types": { 00:10:31.352 "read": true, 00:10:31.352 "write": true, 00:10:31.352 "unmap": true, 00:10:31.352 "write_zeroes": true, 00:10:31.352 "flush": true, 00:10:31.352 "reset": true, 00:10:31.352 "compare": false, 00:10:31.352 "compare_and_write": false, 00:10:31.352 "abort": false, 00:10:31.352 "nvme_admin": false, 00:10:31.352 "nvme_io": false 00:10:31.352 }, 00:10:31.352 "memory_domains": [ 00:10:31.352 { 00:10:31.352 "dma_device_id": "system", 00:10:31.352 "dma_device_type": 1 00:10:31.352 }, 00:10:31.352 { 00:10:31.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.352 "dma_device_type": 2 00:10:31.352 }, 00:10:31.352 { 00:10:31.352 "dma_device_id": "system", 00:10:31.352 "dma_device_type": 1 00:10:31.352 }, 00:10:31.352 { 00:10:31.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.352 "dma_device_type": 2 00:10:31.352 } 00:10:31.352 ], 00:10:31.352 "driver_specific": { 00:10:31.352 "raid": { 00:10:31.352 "uuid": "d0ef380d-4333-4023-a4c8-cc0fcdcf4ebc", 00:10:31.352 "strip_size_kb": 64, 00:10:31.352 "state": "online", 00:10:31.352 "raid_level": "concat", 00:10:31.352 "superblock": false, 00:10:31.352 "num_base_bdevs": 2, 00:10:31.352 "num_base_bdevs_discovered": 2, 00:10:31.352 "num_base_bdevs_operational": 2, 00:10:31.352 
"base_bdevs_list": [ 00:10:31.352 { 00:10:31.352 "name": "BaseBdev1", 00:10:31.352 "uuid": "caff9b40-f4b0-4828-99dd-fd12f74294af", 00:10:31.352 "is_configured": true, 00:10:31.352 "data_offset": 0, 00:10:31.352 "data_size": 65536 00:10:31.352 }, 00:10:31.352 { 00:10:31.352 "name": "BaseBdev2", 00:10:31.352 "uuid": "675474ef-6dc2-4353-a6b1-5b956afcc5c1", 00:10:31.352 "is_configured": true, 00:10:31.352 "data_offset": 0, 00:10:31.352 "data_size": 65536 00:10:31.352 } 00:10:31.352 ] 00:10:31.352 } 00:10:31.352 } 00:10:31.352 }' 00:10:31.352 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.352 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:10:31.352 BaseBdev2' 00:10:31.352 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:31.352 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:31.352 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:31.609 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:31.609 "name": "BaseBdev1", 00:10:31.609 "aliases": [ 00:10:31.609 "caff9b40-f4b0-4828-99dd-fd12f74294af" 00:10:31.609 ], 00:10:31.609 "product_name": "Malloc disk", 00:10:31.609 "block_size": 512, 00:10:31.609 "num_blocks": 65536, 00:10:31.609 "uuid": "caff9b40-f4b0-4828-99dd-fd12f74294af", 00:10:31.609 "assigned_rate_limits": { 00:10:31.609 "rw_ios_per_sec": 0, 00:10:31.609 "rw_mbytes_per_sec": 0, 00:10:31.609 "r_mbytes_per_sec": 0, 00:10:31.609 "w_mbytes_per_sec": 0 00:10:31.609 }, 00:10:31.609 "claimed": true, 00:10:31.609 "claim_type": "exclusive_write", 00:10:31.609 "zoned": false, 00:10:31.609 "supported_io_types": { 00:10:31.609 "read": true, 00:10:31.610 "write": true, 00:10:31.610 "unmap": true, 00:10:31.610 "write_zeroes": true, 00:10:31.610 "flush": true, 00:10:31.610 "reset": true, 00:10:31.610 "compare": false, 00:10:31.610 "compare_and_write": false, 00:10:31.610 "abort": true, 00:10:31.610 "nvme_admin": false, 00:10:31.610 "nvme_io": false 00:10:31.610 }, 00:10:31.610 "memory_domains": [ 00:10:31.610 { 00:10:31.610 "dma_device_id": "system", 00:10:31.610 "dma_device_type": 1 00:10:31.610 }, 00:10:31.610 { 00:10:31.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.610 "dma_device_type": 2 00:10:31.610 } 00:10:31.610 ], 00:10:31.610 "driver_specific": {} 00:10:31.610 }' 00:10:31.610 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:31.610 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:31.867 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:31.867 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:31.867 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:31.867 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:31.867 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:31.867 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:31.867 03:05:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:31.867 03:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:31.867 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:32.125 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:32.125 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:32.125 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:32.125 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:32.383 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:32.383 "name": "BaseBdev2", 00:10:32.383 "aliases": [ 00:10:32.383 "675474ef-6dc2-4353-a6b1-5b956afcc5c1" 00:10:32.383 ], 00:10:32.383 "product_name": "Malloc disk", 00:10:32.383 "block_size": 512, 00:10:32.383 "num_blocks": 65536, 00:10:32.383 "uuid": "675474ef-6dc2-4353-a6b1-5b956afcc5c1", 00:10:32.383 "assigned_rate_limits": { 00:10:32.383 "rw_ios_per_sec": 0, 00:10:32.383 "rw_mbytes_per_sec": 0, 00:10:32.383 "r_mbytes_per_sec": 0, 00:10:32.383 "w_mbytes_per_sec": 0 00:10:32.383 }, 00:10:32.383 "claimed": true, 00:10:32.383 "claim_type": "exclusive_write", 00:10:32.383 "zoned": false, 00:10:32.383 "supported_io_types": { 00:10:32.383 "read": true, 00:10:32.383 "write": true, 00:10:32.383 "unmap": true, 00:10:32.383 "write_zeroes": true, 00:10:32.383 "flush": true, 00:10:32.383 "reset": true, 00:10:32.383 "compare": false, 00:10:32.383 "compare_and_write": false, 00:10:32.383 "abort": true, 00:10:32.383 "nvme_admin": false, 00:10:32.383 "nvme_io": false 00:10:32.383 }, 00:10:32.383 "memory_domains": [ 00:10:32.383 { 00:10:32.383 "dma_device_id": "system", 00:10:32.383 "dma_device_type": 1 00:10:32.383 }, 00:10:32.383 { 00:10:32.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.383 "dma_device_type": 2 00:10:32.383 } 00:10:32.383 ], 00:10:32.383 "driver_specific": {} 00:10:32.383 }' 00:10:32.383 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:32.383 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:32.383 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:32.383 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:32.383 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:32.383 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:32.383 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:32.383 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:32.641 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:32.641 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:32.641 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:32.641 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:32.641 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:32.899 [2024-05-15 03:05:03.913871] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.899 [2024-05-15 03:05:03.913896] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.899 [2024-05-15 03:05:03.913935] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.899 03:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.157 03:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:33.157 "name": "Existed_Raid", 00:10:33.157 "uuid": "d0ef380d-4333-4023-a4c8-cc0fcdcf4ebc", 00:10:33.157 "strip_size_kb": 64, 00:10:33.157 "state": "offline", 00:10:33.157 "raid_level": "concat", 00:10:33.157 "superblock": false, 00:10:33.157 "num_base_bdevs": 2, 00:10:33.157 "num_base_bdevs_discovered": 1, 00:10:33.157 "num_base_bdevs_operational": 1, 00:10:33.157 "base_bdevs_list": [ 00:10:33.157 { 00:10:33.157 "name": null, 00:10:33.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.157 "is_configured": false, 00:10:33.157 "data_offset": 0, 00:10:33.157 "data_size": 65536 00:10:33.157 }, 00:10:33.157 { 00:10:33.157 "name": "BaseBdev2", 00:10:33.157 "uuid": "675474ef-6dc2-4353-a6b1-5b956afcc5c1", 00:10:33.157 "is_configured": true, 00:10:33.157 "data_offset": 0, 00:10:33.157 "data_size": 65536 00:10:33.157 } 00:10:33.157 ] 00:10:33.157 }' 00:10:33.157 03:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:10:33.157 03:05:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.722 03:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:33.722 03:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:33.722 03:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.722 03:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:10:33.980 03:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:10:33.980 03:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.980 03:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:34.270 [2024-05-15 03:05:05.222576] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:34.270 [2024-05-15 03:05:05.222627] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18196b0 name Existed_Raid, state offline 00:10:34.270 03:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:34.270 03:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.270 03:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.270 03:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:10:34.528 03:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:10:34.528 03:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:10:34.528 03:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:10:34.528 03:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 4046715 00:10:34.528 03:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 4046715 ']' 00:10:34.528 03:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 4046715 00:10:34.528 03:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:10:34.529 03:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:34.529 03:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4046715 00:10:34.529 03:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:34.529 03:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:34.529 03:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4046715' 00:10:34.529 killing process with pid 4046715 00:10:34.529 03:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 4046715 00:10:34.529 [2024-05-15 03:05:05.551119] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.529 03:05:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@970 -- # wait 4046715
00:10:34.529 [2024-05-15 03:05:05.552005] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:34.787 03:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0
00:10:34.787
00:10:34.787 real 0m10.801s
00:10:34.787 user 0m19.682s
00:10:34.787 sys 0m1.534s
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.788 ************************************
00:10:34.788 END TEST raid_state_function_test
00:10:34.788 ************************************
00:10:34.788 03:05:05 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true
00:10:34.788 03:05:05 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']'
00:10:34.788 03:05:05 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:34.788 03:05:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:34.788 ************************************
00:10:34.788 START TEST raid_state_function_test_sb ************************************
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 true
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 ))
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs ))
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ ))
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs ))
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ ))
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs ))
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']'
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64'
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']'
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=4048724
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4048724'
00:10:34.788 Process raid pid: 4048724
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 4048724 /var/tmp/spdk-raid.sock
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 4048724 ']'
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:10:34.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable
00:10:34.788 03:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:34.788 [2024-05-15 03:05:05.913455] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
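[Note: The raid_state_function_test_sb pass starting here repeats the concat state-machine test with superblock=true. Condensed into bare RPC calls, the flow the harness drives looks roughly like the sketch below. This is a reconstruction from the trace, not the test script itself: the $RPC shorthand and the jq projection of .state are mine, and the real test deliberately issues bdev_raid_create before its base bdevs exist so it can exercise the "configuring" state first. Every command name and argument is taken verbatim from this log.

  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # two 32 MiB malloc disks with 512 B blocks -> 65536 blocks each
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  # -z 64 sets strip_size_kb=64; -s requests an on-disk superblock
  $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # with both base bdevs present the array should report state "online"
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

The -s flag is what moves the data region: the dumps that follow show "data_offset": 2048 and "data_size": 63488 for each 65536-block base bdev (2048 blocks x 512 B = 1 MiB reserved at the front of each disk for the superblock), so the assembled volume reports "num_blocks": 126976 = 2 x 63488. The non-superblock run above showed "data_offset": 0 and "num_blocks": 131072 = 2 x 65536 for the same malloc disks.]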
00:10:34.788 [2024-05-15 03:05:05.913510] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.046 [2024-05-15 03:05:06.014138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.046 [2024-05-15 03:05:06.107163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.046 [2024-05-15 03:05:06.167454] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.046 [2024-05-15 03:05:06.167486] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.980 03:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:35.980 03:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:10:35.980 03:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:35.980 [2024-05-15 03:05:07.098587] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.980 [2024-05-15 03:05:07.098625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.980 [2024-05-15 03:05:07.098634] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.980 [2024-05-15 03:05:07.098644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.980 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:35.980 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:35.980 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:35.980 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:35.980 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:35.980 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:35.980 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:35.980 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:35.980 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:35.980 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:35.980 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.980 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.239 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:36.239 "name": "Existed_Raid", 00:10:36.239 "uuid": "23525385-53f7-44b7-b596-c8e6919d1d64", 00:10:36.239 "strip_size_kb": 64, 00:10:36.239 "state": "configuring", 00:10:36.239 "raid_level": "concat", 00:10:36.239 
"superblock": true, 00:10:36.239 "num_base_bdevs": 2, 00:10:36.239 "num_base_bdevs_discovered": 0, 00:10:36.239 "num_base_bdevs_operational": 2, 00:10:36.239 "base_bdevs_list": [ 00:10:36.239 { 00:10:36.239 "name": "BaseBdev1", 00:10:36.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.239 "is_configured": false, 00:10:36.239 "data_offset": 0, 00:10:36.239 "data_size": 0 00:10:36.239 }, 00:10:36.239 { 00:10:36.239 "name": "BaseBdev2", 00:10:36.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.239 "is_configured": false, 00:10:36.239 "data_offset": 0, 00:10:36.239 "data_size": 0 00:10:36.239 } 00:10:36.239 ] 00:10:36.239 }' 00:10:36.239 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:36.239 03:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.173 03:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:37.173 [2024-05-15 03:05:08.209411] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.173 [2024-05-15 03:05:08.209438] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x219fdc0 name Existed_Raid, state configuring 00:10:37.173 03:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:37.431 [2024-05-15 03:05:08.458095] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.431 [2024-05-15 03:05:08.458121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.431 [2024-05-15 03:05:08.458129] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.431 [2024-05-15 03:05:08.458138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.431 03:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:37.689 [2024-05-15 03:05:08.716322] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.689 BaseBdev1 00:10:37.689 03:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:10:37.689 03:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:37.689 03:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:37.689 03:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:37.689 03:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:37.689 03:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:37.689 03:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:37.947 03:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.205 [ 00:10:38.205 { 00:10:38.205 "name": "BaseBdev1", 00:10:38.205 "aliases": [ 00:10:38.205 "5f449975-2b6e-4e2b-9424-a451ea0e210c" 00:10:38.205 ], 00:10:38.205 "product_name": "Malloc disk", 00:10:38.205 "block_size": 512, 00:10:38.205 "num_blocks": 65536, 00:10:38.205 "uuid": "5f449975-2b6e-4e2b-9424-a451ea0e210c", 00:10:38.205 "assigned_rate_limits": { 00:10:38.205 "rw_ios_per_sec": 0, 00:10:38.205 "rw_mbytes_per_sec": 0, 00:10:38.205 "r_mbytes_per_sec": 0, 00:10:38.205 "w_mbytes_per_sec": 0 00:10:38.205 }, 00:10:38.205 "claimed": true, 00:10:38.205 "claim_type": "exclusive_write", 00:10:38.205 "zoned": false, 00:10:38.205 "supported_io_types": { 00:10:38.205 "read": true, 00:10:38.205 "write": true, 00:10:38.205 "unmap": true, 00:10:38.205 "write_zeroes": true, 00:10:38.205 "flush": true, 00:10:38.205 "reset": true, 00:10:38.205 "compare": false, 00:10:38.205 "compare_and_write": false, 00:10:38.205 "abort": true, 00:10:38.205 "nvme_admin": false, 00:10:38.205 "nvme_io": false 00:10:38.205 }, 00:10:38.205 "memory_domains": [ 00:10:38.205 { 00:10:38.205 "dma_device_id": "system", 00:10:38.205 "dma_device_type": 1 00:10:38.205 }, 00:10:38.205 { 00:10:38.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.205 "dma_device_type": 2 00:10:38.205 } 00:10:38.205 ], 00:10:38.205 "driver_specific": {} 00:10:38.205 } 00:10:38.205 ] 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.205 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.464 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:38.464 "name": "Existed_Raid", 00:10:38.464 "uuid": "b29b97f9-0ad3-49b2-8e44-f1bc91920443", 00:10:38.464 "strip_size_kb": 64, 00:10:38.464 "state": "configuring", 00:10:38.464 "raid_level": "concat", 00:10:38.464 "superblock": true, 00:10:38.464 "num_base_bdevs": 2, 00:10:38.464 "num_base_bdevs_discovered": 1, 00:10:38.464 "num_base_bdevs_operational": 2, 00:10:38.464 
"base_bdevs_list": [ 00:10:38.464 { 00:10:38.464 "name": "BaseBdev1", 00:10:38.464 "uuid": "5f449975-2b6e-4e2b-9424-a451ea0e210c", 00:10:38.464 "is_configured": true, 00:10:38.464 "data_offset": 2048, 00:10:38.464 "data_size": 63488 00:10:38.464 }, 00:10:38.464 { 00:10:38.464 "name": "BaseBdev2", 00:10:38.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.464 "is_configured": false, 00:10:38.464 "data_offset": 0, 00:10:38.464 "data_size": 0 00:10:38.464 } 00:10:38.464 ] 00:10:38.464 }' 00:10:38.464 03:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:38.464 03:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.030 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:39.288 [2024-05-15 03:05:10.336677] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.288 [2024-05-15 03:05:10.336718] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21a0060 name Existed_Raid, state configuring 00:10:39.288 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:39.546 [2024-05-15 03:05:10.593383] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.546 [2024-05-15 03:05:10.594911] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.546 [2024-05-15 03:05:10.594940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.546 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:39.804 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:39.804 "name": "Existed_Raid", 00:10:39.804 "uuid": "d79a63d5-2b34-41f2-8247-8fd8da10006c", 00:10:39.804 "strip_size_kb": 64, 00:10:39.804 "state": "configuring", 00:10:39.804 "raid_level": "concat", 00:10:39.804 "superblock": true, 00:10:39.804 "num_base_bdevs": 2, 00:10:39.804 "num_base_bdevs_discovered": 1, 00:10:39.804 "num_base_bdevs_operational": 2, 00:10:39.804 "base_bdevs_list": [ 00:10:39.804 { 00:10:39.804 "name": "BaseBdev1", 00:10:39.804 "uuid": "5f449975-2b6e-4e2b-9424-a451ea0e210c", 00:10:39.804 "is_configured": true, 00:10:39.804 "data_offset": 2048, 00:10:39.804 "data_size": 63488 00:10:39.804 }, 00:10:39.804 { 00:10:39.804 "name": "BaseBdev2", 00:10:39.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.804 "is_configured": false, 00:10:39.804 "data_offset": 0, 00:10:39.804 "data_size": 0 00:10:39.804 } 00:10:39.804 ] 00:10:39.804 }' 00:10:39.804 03:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:39.804 03:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.370 03:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:40.627 [2024-05-15 03:05:11.735616] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.627 [2024-05-15 03:05:11.735761] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x219f6b0 00:10:40.627 [2024-05-15 03:05:11.735774] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:40.627 [2024-05-15 03:05:11.735967] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x219fc70 00:10:40.627 [2024-05-15 03:05:11.736091] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x219f6b0 00:10:40.627 [2024-05-15 03:05:11.736099] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x219f6b0 00:10:40.627 [2024-05-15 03:05:11.736195] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.627 BaseBdev2 00:10:40.627 03:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:10:40.628 03:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:40.628 03:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:40.628 03:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:40.628 03:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:40.628 03:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:40.628 03:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:40.885 03:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.144 [ 00:10:41.144 { 00:10:41.144 "name": "BaseBdev2", 00:10:41.144 
"aliases": [ 00:10:41.144 "8beda1ac-bdc1-47fb-930c-7d21d47f41db" 00:10:41.144 ], 00:10:41.144 "product_name": "Malloc disk", 00:10:41.144 "block_size": 512, 00:10:41.144 "num_blocks": 65536, 00:10:41.144 "uuid": "8beda1ac-bdc1-47fb-930c-7d21d47f41db", 00:10:41.144 "assigned_rate_limits": { 00:10:41.144 "rw_ios_per_sec": 0, 00:10:41.144 "rw_mbytes_per_sec": 0, 00:10:41.144 "r_mbytes_per_sec": 0, 00:10:41.144 "w_mbytes_per_sec": 0 00:10:41.144 }, 00:10:41.144 "claimed": true, 00:10:41.144 "claim_type": "exclusive_write", 00:10:41.144 "zoned": false, 00:10:41.144 "supported_io_types": { 00:10:41.144 "read": true, 00:10:41.144 "write": true, 00:10:41.144 "unmap": true, 00:10:41.144 "write_zeroes": true, 00:10:41.144 "flush": true, 00:10:41.144 "reset": true, 00:10:41.144 "compare": false, 00:10:41.144 "compare_and_write": false, 00:10:41.144 "abort": true, 00:10:41.144 "nvme_admin": false, 00:10:41.144 "nvme_io": false 00:10:41.144 }, 00:10:41.144 "memory_domains": [ 00:10:41.144 { 00:10:41.144 "dma_device_id": "system", 00:10:41.144 "dma_device_type": 1 00:10:41.144 }, 00:10:41.144 { 00:10:41.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.144 "dma_device_type": 2 00:10:41.144 } 00:10:41.144 ], 00:10:41.144 "driver_specific": {} 00:10:41.144 } 00:10:41.144 ] 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.144 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.402 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:41.402 "name": "Existed_Raid", 00:10:41.402 "uuid": "d79a63d5-2b34-41f2-8247-8fd8da10006c", 00:10:41.402 "strip_size_kb": 64, 00:10:41.402 "state": "online", 00:10:41.402 "raid_level": "concat", 00:10:41.402 "superblock": true, 00:10:41.402 "num_base_bdevs": 2, 00:10:41.402 "num_base_bdevs_discovered": 2, 
00:10:41.402 "num_base_bdevs_operational": 2, 00:10:41.402 "base_bdevs_list": [ 00:10:41.403 { 00:10:41.403 "name": "BaseBdev1", 00:10:41.403 "uuid": "5f449975-2b6e-4e2b-9424-a451ea0e210c", 00:10:41.403 "is_configured": true, 00:10:41.403 "data_offset": 2048, 00:10:41.403 "data_size": 63488 00:10:41.403 }, 00:10:41.403 { 00:10:41.403 "name": "BaseBdev2", 00:10:41.403 "uuid": "8beda1ac-bdc1-47fb-930c-7d21d47f41db", 00:10:41.403 "is_configured": true, 00:10:41.403 "data_offset": 2048, 00:10:41.403 "data_size": 63488 00:10:41.403 } 00:10:41.403 ] 00:10:41.403 }' 00:10:41.403 03:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:41.403 03:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.337 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.337 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:10:42.337 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:10:42.337 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:10:42.337 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:10:42.337 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:10:42.337 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:42.337 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:10:42.337 [2024-05-15 03:05:13.364350] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.337 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:10:42.337 "name": "Existed_Raid", 00:10:42.337 "aliases": [ 00:10:42.337 "d79a63d5-2b34-41f2-8247-8fd8da10006c" 00:10:42.337 ], 00:10:42.337 "product_name": "Raid Volume", 00:10:42.337 "block_size": 512, 00:10:42.337 "num_blocks": 126976, 00:10:42.337 "uuid": "d79a63d5-2b34-41f2-8247-8fd8da10006c", 00:10:42.337 "assigned_rate_limits": { 00:10:42.337 "rw_ios_per_sec": 0, 00:10:42.337 "rw_mbytes_per_sec": 0, 00:10:42.337 "r_mbytes_per_sec": 0, 00:10:42.337 "w_mbytes_per_sec": 0 00:10:42.337 }, 00:10:42.337 "claimed": false, 00:10:42.337 "zoned": false, 00:10:42.337 "supported_io_types": { 00:10:42.337 "read": true, 00:10:42.337 "write": true, 00:10:42.337 "unmap": true, 00:10:42.337 "write_zeroes": true, 00:10:42.337 "flush": true, 00:10:42.337 "reset": true, 00:10:42.337 "compare": false, 00:10:42.337 "compare_and_write": false, 00:10:42.337 "abort": false, 00:10:42.337 "nvme_admin": false, 00:10:42.337 "nvme_io": false 00:10:42.337 }, 00:10:42.337 "memory_domains": [ 00:10:42.337 { 00:10:42.337 "dma_device_id": "system", 00:10:42.337 "dma_device_type": 1 00:10:42.337 }, 00:10:42.337 { 00:10:42.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.337 "dma_device_type": 2 00:10:42.337 }, 00:10:42.337 { 00:10:42.337 "dma_device_id": "system", 00:10:42.337 "dma_device_type": 1 00:10:42.337 }, 00:10:42.337 { 00:10:42.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.338 "dma_device_type": 2 00:10:42.338 } 00:10:42.338 ], 00:10:42.338 "driver_specific": { 00:10:42.338 "raid": { 00:10:42.338 "uuid": 
"d79a63d5-2b34-41f2-8247-8fd8da10006c", 00:10:42.338 "strip_size_kb": 64, 00:10:42.338 "state": "online", 00:10:42.338 "raid_level": "concat", 00:10:42.338 "superblock": true, 00:10:42.338 "num_base_bdevs": 2, 00:10:42.338 "num_base_bdevs_discovered": 2, 00:10:42.338 "num_base_bdevs_operational": 2, 00:10:42.338 "base_bdevs_list": [ 00:10:42.338 { 00:10:42.338 "name": "BaseBdev1", 00:10:42.338 "uuid": "5f449975-2b6e-4e2b-9424-a451ea0e210c", 00:10:42.338 "is_configured": true, 00:10:42.338 "data_offset": 2048, 00:10:42.338 "data_size": 63488 00:10:42.338 }, 00:10:42.338 { 00:10:42.338 "name": "BaseBdev2", 00:10:42.338 "uuid": "8beda1ac-bdc1-47fb-930c-7d21d47f41db", 00:10:42.338 "is_configured": true, 00:10:42.338 "data_offset": 2048, 00:10:42.338 "data_size": 63488 00:10:42.338 } 00:10:42.338 ] 00:10:42.338 } 00:10:42.338 } 00:10:42.338 }' 00:10:42.338 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.338 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:10:42.338 BaseBdev2' 00:10:42.338 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:42.338 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:42.338 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:42.596 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:42.596 "name": "BaseBdev1", 00:10:42.596 "aliases": [ 00:10:42.596 "5f449975-2b6e-4e2b-9424-a451ea0e210c" 00:10:42.596 ], 00:10:42.596 "product_name": "Malloc disk", 00:10:42.596 "block_size": 512, 00:10:42.596 "num_blocks": 65536, 00:10:42.596 "uuid": "5f449975-2b6e-4e2b-9424-a451ea0e210c", 00:10:42.596 "assigned_rate_limits": { 00:10:42.596 "rw_ios_per_sec": 0, 00:10:42.596 "rw_mbytes_per_sec": 0, 00:10:42.596 "r_mbytes_per_sec": 0, 00:10:42.596 "w_mbytes_per_sec": 0 00:10:42.596 }, 00:10:42.596 "claimed": true, 00:10:42.596 "claim_type": "exclusive_write", 00:10:42.596 "zoned": false, 00:10:42.596 "supported_io_types": { 00:10:42.596 "read": true, 00:10:42.596 "write": true, 00:10:42.596 "unmap": true, 00:10:42.596 "write_zeroes": true, 00:10:42.596 "flush": true, 00:10:42.596 "reset": true, 00:10:42.596 "compare": false, 00:10:42.596 "compare_and_write": false, 00:10:42.596 "abort": true, 00:10:42.596 "nvme_admin": false, 00:10:42.596 "nvme_io": false 00:10:42.596 }, 00:10:42.596 "memory_domains": [ 00:10:42.596 { 00:10:42.596 "dma_device_id": "system", 00:10:42.596 "dma_device_type": 1 00:10:42.596 }, 00:10:42.596 { 00:10:42.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.596 "dma_device_type": 2 00:10:42.596 } 00:10:42.596 ], 00:10:42.596 "driver_specific": {} 00:10:42.596 }' 00:10:42.596 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:42.596 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:42.856 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:42.856 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:42.856 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:42.856 
03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:42.856 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:42.856 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:42.856 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:42.856 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:42.856 03:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:43.114 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:43.114 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:43.114 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:43.114 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:43.372 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:43.372 "name": "BaseBdev2", 00:10:43.372 "aliases": [ 00:10:43.372 "8beda1ac-bdc1-47fb-930c-7d21d47f41db" 00:10:43.372 ], 00:10:43.372 "product_name": "Malloc disk", 00:10:43.372 "block_size": 512, 00:10:43.372 "num_blocks": 65536, 00:10:43.372 "uuid": "8beda1ac-bdc1-47fb-930c-7d21d47f41db", 00:10:43.372 "assigned_rate_limits": { 00:10:43.372 "rw_ios_per_sec": 0, 00:10:43.372 "rw_mbytes_per_sec": 0, 00:10:43.372 "r_mbytes_per_sec": 0, 00:10:43.372 "w_mbytes_per_sec": 0 00:10:43.372 }, 00:10:43.372 "claimed": true, 00:10:43.372 "claim_type": "exclusive_write", 00:10:43.372 "zoned": false, 00:10:43.372 "supported_io_types": { 00:10:43.372 "read": true, 00:10:43.372 "write": true, 00:10:43.372 "unmap": true, 00:10:43.372 "write_zeroes": true, 00:10:43.373 "flush": true, 00:10:43.373 "reset": true, 00:10:43.373 "compare": false, 00:10:43.373 "compare_and_write": false, 00:10:43.373 "abort": true, 00:10:43.373 "nvme_admin": false, 00:10:43.373 "nvme_io": false 00:10:43.373 }, 00:10:43.373 "memory_domains": [ 00:10:43.373 { 00:10:43.373 "dma_device_id": "system", 00:10:43.373 "dma_device_type": 1 00:10:43.373 }, 00:10:43.373 { 00:10:43.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.373 "dma_device_type": 2 00:10:43.373 } 00:10:43.373 ], 00:10:43.373 "driver_specific": {} 00:10:43.373 }' 00:10:43.373 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:43.373 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:43.373 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:43.373 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:43.373 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:43.373 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:43.373 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:43.373 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:43.631 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:43.631 
03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:43.631 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:43.631 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:43.631 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:43.890 [2024-05-15 03:05:14.888234] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.890 [2024-05-15 03:05:14.888259] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.890 [2024-05-15 03:05:14.888298] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.890 03:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.148 03:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:44.148 "name": "Existed_Raid", 00:10:44.148 "uuid": "d79a63d5-2b34-41f2-8247-8fd8da10006c", 00:10:44.148 "strip_size_kb": 64, 00:10:44.148 "state": "offline", 00:10:44.148 "raid_level": "concat", 00:10:44.148 "superblock": true, 00:10:44.148 "num_base_bdevs": 2, 00:10:44.148 "num_base_bdevs_discovered": 1, 00:10:44.148 "num_base_bdevs_operational": 1, 00:10:44.148 "base_bdevs_list": [ 00:10:44.148 { 00:10:44.148 "name": null, 00:10:44.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.148 
"is_configured": false, 00:10:44.148 "data_offset": 2048, 00:10:44.148 "data_size": 63488 00:10:44.148 }, 00:10:44.148 { 00:10:44.148 "name": "BaseBdev2", 00:10:44.148 "uuid": "8beda1ac-bdc1-47fb-930c-7d21d47f41db", 00:10:44.148 "is_configured": true, 00:10:44.148 "data_offset": 2048, 00:10:44.148 "data_size": 63488 00:10:44.148 } 00:10:44.148 ] 00:10:44.148 }' 00:10:44.148 03:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:44.148 03:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.714 03:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:44.714 03:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.714 03:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.714 03:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:10:44.972 03:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:10:44.972 03:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.972 03:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:45.230 [2024-05-15 03:05:16.281153] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.230 [2024-05-15 03:05:16.281203] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x219f6b0 name Existed_Raid, state offline 00:10:45.230 03:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.230 03:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.230 03:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.230 03:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 4048724 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 4048724 ']' 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 4048724 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4048724 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4048724' 00:10:45.489 killing process with pid 4048724 00:10:45.489 03:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 4048724 00:10:45.489 [2024-05-15 03:05:16.607385] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.490 03:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 4048724 00:10:45.490 [2024-05-15 03:05:16.608269] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.749 03:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:10:45.749 00:10:45.749 real 0m10.980s 00:10:45.749 user 0m20.018s 00:10:45.749 sys 0m1.601s 00:10:45.749 03:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:45.749 03:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.749 ************************************ 00:10:45.749 END TEST raid_state_function_test_sb 00:10:45.749 ************************************ 00:10:45.749 03:05:16 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:45.749 03:05:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:45.749 03:05:16 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:45.749 03:05:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.008 ************************************ 00:10:46.008 START TEST raid_superblock_test 00:10:46.008 ************************************ 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 2 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # 
strip_size_create_arg='-z 64' 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=4050766 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 4050766 /var/tmp/spdk-raid.sock 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 4050766 ']' 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:46.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:46.008 03:05:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.008 [2024-05-15 03:05:16.969277] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:10:46.008 [2024-05-15 03:05:16.969335] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4050766 ] 00:10:46.008 [2024-05-15 03:05:17.067088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.008 [2024-05-15 03:05:17.161626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.267 [2024-05-15 03:05:17.224940] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.267 [2024-05-15 03:05:17.224976] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.833 03:05:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:46.833 03:05:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:10:46.833 03:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:46.833 03:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:46.833 03:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:46.833 03:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:46.833 03:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:46.833 03:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:46.833 03:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:46.833 03:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:46.833 03:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:47.091 malloc1 00:10:47.091 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:47.349 [2024-05-15 03:05:18.411443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:47.349 [2024-05-15 03:05:18.411487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.349 [2024-05-15 03:05:18.411507] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1278a00 00:10:47.349 [2024-05-15 03:05:18.411516] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.349 [2024-05-15 03:05:18.413241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.349 [2024-05-15 03:05:18.413267] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:47.349 pt1 00:10:47.349 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.349 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.349 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:47.349 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:47.349 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:47.349 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:47.349 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:47.349 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:47.349 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:47.608 malloc2 00:10:47.608 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:47.865 [2024-05-15 03:05:18.913470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:47.865 [2024-05-15 03:05:18.913513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.865 [2024-05-15 03:05:18.913532] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12795f0 00:10:47.865 [2024-05-15 03:05:18.913542] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.865 [2024-05-15 03:05:18.915096] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.865 [2024-05-15 03:05:18.915121] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:47.865 pt2 00:10:47.865 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.865 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.865 03:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:10:48.123 [2024-05-15 03:05:19.166157] bdev_raid.c:3122:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:10:48.123 [2024-05-15 03:05:19.167489] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:48.123 [2024-05-15 03:05:19.167635] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x141e760 00:10:48.123 [2024-05-15 03:05:19.167646] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:48.123 [2024-05-15 03:05:19.167845] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x128f8f0 00:10:48.123 [2024-05-15 03:05:19.168004] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x141e760 00:10:48.123 [2024-05-15 03:05:19.168017] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x141e760 00:10:48.123 [2024-05-15 03:05:19.168117] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.123 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:48.123 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:48.123 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:48.123 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:48.123 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:48.123 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:48.123 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:48.123 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:48.123 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:48.123 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:48.123 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.123 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.392 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:48.392 "name": "raid_bdev1", 00:10:48.392 "uuid": "88030ae4-67f1-4207-beff-ba9baa315158", 00:10:48.392 "strip_size_kb": 64, 00:10:48.392 "state": "online", 00:10:48.392 "raid_level": "concat", 00:10:48.392 "superblock": true, 00:10:48.392 "num_base_bdevs": 2, 00:10:48.392 "num_base_bdevs_discovered": 2, 00:10:48.392 "num_base_bdevs_operational": 2, 00:10:48.392 "base_bdevs_list": [ 00:10:48.392 { 00:10:48.392 "name": "pt1", 00:10:48.392 "uuid": "6ad3bdf1-d212-5d3d-954a-c0c42cbdbc54", 00:10:48.392 "is_configured": true, 00:10:48.392 "data_offset": 2048, 00:10:48.392 "data_size": 63488 00:10:48.392 }, 00:10:48.392 { 00:10:48.392 "name": "pt2", 00:10:48.392 "uuid": "1c83a32b-cba1-5c9f-928b-a4ab8bb78bd9", 00:10:48.392 "is_configured": true, 00:10:48.392 "data_offset": 2048, 00:10:48.392 "data_size": 63488 00:10:48.392 } 00:10:48.392 ] 00:10:48.392 }' 00:10:48.392 03:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:48.392 03:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.983 03:05:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:48.983 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:10:48.983 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:10:48.983 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:10:48.983 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:10:48.983 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:10:48.983 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:48.983 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:10:49.241 [2024-05-15 03:05:20.297523] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.241 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:10:49.241 "name": "raid_bdev1", 00:10:49.241 "aliases": [ 00:10:49.241 "88030ae4-67f1-4207-beff-ba9baa315158" 00:10:49.241 ], 00:10:49.241 "product_name": "Raid Volume", 00:10:49.241 "block_size": 512, 00:10:49.241 "num_blocks": 126976, 00:10:49.241 "uuid": "88030ae4-67f1-4207-beff-ba9baa315158", 00:10:49.241 "assigned_rate_limits": { 00:10:49.241 "rw_ios_per_sec": 0, 00:10:49.241 "rw_mbytes_per_sec": 0, 00:10:49.241 "r_mbytes_per_sec": 0, 00:10:49.241 "w_mbytes_per_sec": 0 00:10:49.241 }, 00:10:49.241 "claimed": false, 00:10:49.241 "zoned": false, 00:10:49.241 "supported_io_types": { 00:10:49.241 "read": true, 00:10:49.241 "write": true, 00:10:49.241 "unmap": true, 00:10:49.241 "write_zeroes": true, 00:10:49.241 "flush": true, 00:10:49.241 "reset": true, 00:10:49.241 "compare": false, 00:10:49.241 "compare_and_write": false, 00:10:49.241 "abort": false, 00:10:49.241 "nvme_admin": false, 00:10:49.241 "nvme_io": false 00:10:49.241 }, 00:10:49.241 "memory_domains": [ 00:10:49.241 { 00:10:49.241 "dma_device_id": "system", 00:10:49.241 "dma_device_type": 1 00:10:49.241 }, 00:10:49.241 { 00:10:49.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.241 "dma_device_type": 2 00:10:49.241 }, 00:10:49.241 { 00:10:49.241 "dma_device_id": "system", 00:10:49.241 "dma_device_type": 1 00:10:49.241 }, 00:10:49.241 { 00:10:49.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.241 "dma_device_type": 2 00:10:49.241 } 00:10:49.241 ], 00:10:49.241 "driver_specific": { 00:10:49.241 "raid": { 00:10:49.241 "uuid": "88030ae4-67f1-4207-beff-ba9baa315158", 00:10:49.241 "strip_size_kb": 64, 00:10:49.241 "state": "online", 00:10:49.241 "raid_level": "concat", 00:10:49.241 "superblock": true, 00:10:49.241 "num_base_bdevs": 2, 00:10:49.241 "num_base_bdevs_discovered": 2, 00:10:49.241 "num_base_bdevs_operational": 2, 00:10:49.241 "base_bdevs_list": [ 00:10:49.241 { 00:10:49.241 "name": "pt1", 00:10:49.241 "uuid": "6ad3bdf1-d212-5d3d-954a-c0c42cbdbc54", 00:10:49.241 "is_configured": true, 00:10:49.241 "data_offset": 2048, 00:10:49.241 "data_size": 63488 00:10:49.241 }, 00:10:49.241 { 00:10:49.241 "name": "pt2", 00:10:49.241 "uuid": "1c83a32b-cba1-5c9f-928b-a4ab8bb78bd9", 00:10:49.241 "is_configured": true, 00:10:49.241 "data_offset": 2048, 00:10:49.241 "data_size": 63488 00:10:49.241 } 00:10:49.241 ] 00:10:49.241 } 00:10:49.241 } 00:10:49.241 }' 00:10:49.241 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq 
-r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.241 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:10:49.241 pt2' 00:10:49.241 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:49.241 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:49.241 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:49.499 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:49.499 "name": "pt1", 00:10:49.499 "aliases": [ 00:10:49.499 "6ad3bdf1-d212-5d3d-954a-c0c42cbdbc54" 00:10:49.499 ], 00:10:49.499 "product_name": "passthru", 00:10:49.499 "block_size": 512, 00:10:49.499 "num_blocks": 65536, 00:10:49.499 "uuid": "6ad3bdf1-d212-5d3d-954a-c0c42cbdbc54", 00:10:49.499 "assigned_rate_limits": { 00:10:49.499 "rw_ios_per_sec": 0, 00:10:49.499 "rw_mbytes_per_sec": 0, 00:10:49.499 "r_mbytes_per_sec": 0, 00:10:49.499 "w_mbytes_per_sec": 0 00:10:49.499 }, 00:10:49.499 "claimed": true, 00:10:49.499 "claim_type": "exclusive_write", 00:10:49.499 "zoned": false, 00:10:49.499 "supported_io_types": { 00:10:49.499 "read": true, 00:10:49.499 "write": true, 00:10:49.499 "unmap": true, 00:10:49.499 "write_zeroes": true, 00:10:49.499 "flush": true, 00:10:49.499 "reset": true, 00:10:49.499 "compare": false, 00:10:49.499 "compare_and_write": false, 00:10:49.499 "abort": true, 00:10:49.499 "nvme_admin": false, 00:10:49.499 "nvme_io": false 00:10:49.499 }, 00:10:49.499 "memory_domains": [ 00:10:49.499 { 00:10:49.499 "dma_device_id": "system", 00:10:49.499 "dma_device_type": 1 00:10:49.499 }, 00:10:49.499 { 00:10:49.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.499 "dma_device_type": 2 00:10:49.499 } 00:10:49.499 ], 00:10:49.499 "driver_specific": { 00:10:49.499 "passthru": { 00:10:49.499 "name": "pt1", 00:10:49.499 "base_bdev_name": "malloc1" 00:10:49.499 } 00:10:49.499 } 00:10:49.499 }' 00:10:49.499 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:49.758 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:49.758 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:49.758 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:49.758 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:49.758 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:49.758 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:49.758 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:49.758 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:49.758 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:50.016 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:50.016 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:50.016 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:50.016 03:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:50.016 03:05:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:50.274 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:50.274 "name": "pt2", 00:10:50.274 "aliases": [ 00:10:50.274 "1c83a32b-cba1-5c9f-928b-a4ab8bb78bd9" 00:10:50.274 ], 00:10:50.274 "product_name": "passthru", 00:10:50.274 "block_size": 512, 00:10:50.274 "num_blocks": 65536, 00:10:50.274 "uuid": "1c83a32b-cba1-5c9f-928b-a4ab8bb78bd9", 00:10:50.274 "assigned_rate_limits": { 00:10:50.274 "rw_ios_per_sec": 0, 00:10:50.274 "rw_mbytes_per_sec": 0, 00:10:50.274 "r_mbytes_per_sec": 0, 00:10:50.274 "w_mbytes_per_sec": 0 00:10:50.274 }, 00:10:50.274 "claimed": true, 00:10:50.274 "claim_type": "exclusive_write", 00:10:50.274 "zoned": false, 00:10:50.274 "supported_io_types": { 00:10:50.274 "read": true, 00:10:50.274 "write": true, 00:10:50.274 "unmap": true, 00:10:50.274 "write_zeroes": true, 00:10:50.274 "flush": true, 00:10:50.274 "reset": true, 00:10:50.274 "compare": false, 00:10:50.274 "compare_and_write": false, 00:10:50.274 "abort": true, 00:10:50.274 "nvme_admin": false, 00:10:50.274 "nvme_io": false 00:10:50.274 }, 00:10:50.274 "memory_domains": [ 00:10:50.274 { 00:10:50.274 "dma_device_id": "system", 00:10:50.274 "dma_device_type": 1 00:10:50.274 }, 00:10:50.274 { 00:10:50.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.274 "dma_device_type": 2 00:10:50.274 } 00:10:50.274 ], 00:10:50.274 "driver_specific": { 00:10:50.274 "passthru": { 00:10:50.274 "name": "pt2", 00:10:50.274 "base_bdev_name": "malloc2" 00:10:50.274 } 00:10:50.274 } 00:10:50.274 }' 00:10:50.274 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:50.274 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:50.274 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:50.274 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:50.274 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:50.274 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:50.274 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:50.533 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:50.533 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:50.533 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:50.533 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:50.533 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:50.533 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:50.533 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:50.792 [2024-05-15 03:05:21.825632] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.792 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=88030ae4-67f1-4207-beff-ba9baa315158 00:10:50.792 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
88030ae4-67f1-4207-beff-ba9baa315158 ']' 00:10:50.792 03:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:51.050 [2024-05-15 03:05:22.078086] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.050 [2024-05-15 03:05:22.078107] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.050 [2024-05-15 03:05:22.078160] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.050 [2024-05-15 03:05:22.078204] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.050 [2024-05-15 03:05:22.078213] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x141e760 name raid_bdev1, state offline 00:10:51.050 03:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.050 03:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:51.309 03:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:51.309 03:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:51.309 03:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:51.309 03:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:51.567 03:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:51.567 03:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:51.825 03:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:51.825 03:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:52.084 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:52.084 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:52.084 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:52.084 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:52.084 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:52.084 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.084 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:52.084 03:05:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.084 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:52.084 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.084 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:52.084 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:52.084 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:52.343 [2024-05-15 03:05:23.341398] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:52.343 [2024-05-15 03:05:23.342822] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:52.343 [2024-05-15 03:05:23.342881] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:52.343 [2024-05-15 03:05:23.342920] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:52.343 [2024-05-15 03:05:23.342935] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.343 [2024-05-15 03:05:23.342943] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x141ed40 name raid_bdev1, state configuring 00:10:52.343 request: 00:10:52.343 { 00:10:52.343 "name": "raid_bdev1", 00:10:52.343 "raid_level": "concat", 00:10:52.343 "base_bdevs": [ 00:10:52.343 "malloc1", 00:10:52.343 "malloc2" 00:10:52.343 ], 00:10:52.343 "superblock": false, 00:10:52.343 "strip_size_kb": 64, 00:10:52.343 "method": "bdev_raid_create", 00:10:52.343 "req_id": 1 00:10:52.343 } 00:10:52.343 Got JSON-RPC error response 00:10:52.343 response: 00:10:52.343 { 00:10:52.343 "code": -17, 00:10:52.343 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:52.343 } 00:10:52.343 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:52.343 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:52.343 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:52.343 03:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:52.343 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.343 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:52.601 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:52.601 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:52.601 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:52.859 [2024-05-15 03:05:23.850687] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:52.859 [2024-05-15 03:05:23.850725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.859 [2024-05-15 03:05:23.850745] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1420200 00:10:52.859 [2024-05-15 03:05:23.850754] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.859 [2024-05-15 03:05:23.852431] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.859 [2024-05-15 03:05:23.852457] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:52.859 [2024-05-15 03:05:23.852517] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:10:52.859 [2024-05-15 03:05:23.852541] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:52.859 pt1 00:10:52.859 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:52.859 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:52.859 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:52.859 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:52.859 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:52.859 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:52.859 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:52.859 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:52.859 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:52.859 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:52.859 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.859 03:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.118 03:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:53.118 "name": "raid_bdev1", 00:10:53.118 "uuid": "88030ae4-67f1-4207-beff-ba9baa315158", 00:10:53.118 "strip_size_kb": 64, 00:10:53.118 "state": "configuring", 00:10:53.118 "raid_level": "concat", 00:10:53.118 "superblock": true, 00:10:53.118 "num_base_bdevs": 2, 00:10:53.118 "num_base_bdevs_discovered": 1, 00:10:53.118 "num_base_bdevs_operational": 2, 00:10:53.118 "base_bdevs_list": [ 00:10:53.118 { 00:10:53.118 "name": "pt1", 00:10:53.118 "uuid": "6ad3bdf1-d212-5d3d-954a-c0c42cbdbc54", 00:10:53.118 "is_configured": true, 00:10:53.118 "data_offset": 2048, 00:10:53.118 "data_size": 63488 00:10:53.118 }, 00:10:53.118 { 00:10:53.118 "name": null, 00:10:53.118 "uuid": "1c83a32b-cba1-5c9f-928b-a4ab8bb78bd9", 00:10:53.118 "is_configured": false, 00:10:53.118 "data_offset": 2048, 00:10:53.118 "data_size": 63488 00:10:53.118 } 00:10:53.118 ] 00:10:53.118 }' 00:10:53.118 03:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:53.118 03:05:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.686 03:05:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:53.686 03:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:53.686 03:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.686 03:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:53.946 [2024-05-15 03:05:24.981731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:53.946 [2024-05-15 03:05:24.981775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.947 [2024-05-15 03:05:24.981792] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x141dc20 00:10:53.947 [2024-05-15 03:05:24.981800] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.947 [2024-05-15 03:05:24.982148] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.947 [2024-05-15 03:05:24.982163] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:53.947 [2024-05-15 03:05:24.982220] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:53.947 [2024-05-15 03:05:24.982236] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:53.947 [2024-05-15 03:05:24.982331] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1420cc0 00:10:53.947 [2024-05-15 03:05:24.982340] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:53.947 [2024-05-15 03:05:24.982516] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12715d0 00:10:53.947 [2024-05-15 03:05:24.982643] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1420cc0 00:10:53.947 [2024-05-15 03:05:24.982652] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1420cc0 00:10:53.947 [2024-05-15 03:05:24.982748] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.947 pt2 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.947 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.206 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:54.206 "name": "raid_bdev1", 00:10:54.206 "uuid": "88030ae4-67f1-4207-beff-ba9baa315158", 00:10:54.206 "strip_size_kb": 64, 00:10:54.206 "state": "online", 00:10:54.206 "raid_level": "concat", 00:10:54.206 "superblock": true, 00:10:54.206 "num_base_bdevs": 2, 00:10:54.206 "num_base_bdevs_discovered": 2, 00:10:54.206 "num_base_bdevs_operational": 2, 00:10:54.206 "base_bdevs_list": [ 00:10:54.206 { 00:10:54.206 "name": "pt1", 00:10:54.206 "uuid": "6ad3bdf1-d212-5d3d-954a-c0c42cbdbc54", 00:10:54.206 "is_configured": true, 00:10:54.206 "data_offset": 2048, 00:10:54.206 "data_size": 63488 00:10:54.206 }, 00:10:54.206 { 00:10:54.206 "name": "pt2", 00:10:54.206 "uuid": "1c83a32b-cba1-5c9f-928b-a4ab8bb78bd9", 00:10:54.206 "is_configured": true, 00:10:54.206 "data_offset": 2048, 00:10:54.206 "data_size": 63488 00:10:54.206 } 00:10:54.206 ] 00:10:54.206 }' 00:10:54.206 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:54.206 03:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.775 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:54.775 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:10:54.775 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:10:54.775 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:10:54.775 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:10:54.775 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:10:54.775 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:10:54.775 03:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:55.035 [2024-05-15 03:05:26.113002] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.035 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:10:55.035 "name": "raid_bdev1", 00:10:55.035 "aliases": [ 00:10:55.035 "88030ae4-67f1-4207-beff-ba9baa315158" 00:10:55.035 ], 00:10:55.035 "product_name": "Raid Volume", 00:10:55.035 "block_size": 512, 00:10:55.035 "num_blocks": 126976, 00:10:55.035 "uuid": "88030ae4-67f1-4207-beff-ba9baa315158", 00:10:55.035 "assigned_rate_limits": { 00:10:55.035 "rw_ios_per_sec": 0, 00:10:55.035 "rw_mbytes_per_sec": 0, 00:10:55.035 "r_mbytes_per_sec": 0, 00:10:55.035 "w_mbytes_per_sec": 0 00:10:55.035 }, 00:10:55.035 "claimed": false, 00:10:55.035 "zoned": false, 00:10:55.035 "supported_io_types": { 00:10:55.035 "read": true, 00:10:55.035 "write": true, 00:10:55.035 "unmap": true, 00:10:55.035 "write_zeroes": true, 00:10:55.035 "flush": true, 00:10:55.035 "reset": true, 00:10:55.035 "compare": false, 00:10:55.035 "compare_and_write": false, 00:10:55.035 "abort": false, 
00:10:55.035 "nvme_admin": false, 00:10:55.035 "nvme_io": false 00:10:55.035 }, 00:10:55.035 "memory_domains": [ 00:10:55.035 { 00:10:55.035 "dma_device_id": "system", 00:10:55.035 "dma_device_type": 1 00:10:55.035 }, 00:10:55.035 { 00:10:55.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.035 "dma_device_type": 2 00:10:55.035 }, 00:10:55.035 { 00:10:55.035 "dma_device_id": "system", 00:10:55.035 "dma_device_type": 1 00:10:55.035 }, 00:10:55.035 { 00:10:55.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.035 "dma_device_type": 2 00:10:55.035 } 00:10:55.035 ], 00:10:55.035 "driver_specific": { 00:10:55.035 "raid": { 00:10:55.035 "uuid": "88030ae4-67f1-4207-beff-ba9baa315158", 00:10:55.035 "strip_size_kb": 64, 00:10:55.035 "state": "online", 00:10:55.035 "raid_level": "concat", 00:10:55.035 "superblock": true, 00:10:55.035 "num_base_bdevs": 2, 00:10:55.035 "num_base_bdevs_discovered": 2, 00:10:55.036 "num_base_bdevs_operational": 2, 00:10:55.036 "base_bdevs_list": [ 00:10:55.036 { 00:10:55.036 "name": "pt1", 00:10:55.036 "uuid": "6ad3bdf1-d212-5d3d-954a-c0c42cbdbc54", 00:10:55.036 "is_configured": true, 00:10:55.036 "data_offset": 2048, 00:10:55.036 "data_size": 63488 00:10:55.036 }, 00:10:55.036 { 00:10:55.036 "name": "pt2", 00:10:55.036 "uuid": "1c83a32b-cba1-5c9f-928b-a4ab8bb78bd9", 00:10:55.036 "is_configured": true, 00:10:55.036 "data_offset": 2048, 00:10:55.036 "data_size": 63488 00:10:55.036 } 00:10:55.036 ] 00:10:55.036 } 00:10:55.036 } 00:10:55.036 }' 00:10:55.036 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.036 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:10:55.036 pt2' 00:10:55.036 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:55.036 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:55.036 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:55.295 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:55.295 "name": "pt1", 00:10:55.295 "aliases": [ 00:10:55.295 "6ad3bdf1-d212-5d3d-954a-c0c42cbdbc54" 00:10:55.295 ], 00:10:55.295 "product_name": "passthru", 00:10:55.295 "block_size": 512, 00:10:55.295 "num_blocks": 65536, 00:10:55.295 "uuid": "6ad3bdf1-d212-5d3d-954a-c0c42cbdbc54", 00:10:55.295 "assigned_rate_limits": { 00:10:55.296 "rw_ios_per_sec": 0, 00:10:55.296 "rw_mbytes_per_sec": 0, 00:10:55.296 "r_mbytes_per_sec": 0, 00:10:55.296 "w_mbytes_per_sec": 0 00:10:55.296 }, 00:10:55.296 "claimed": true, 00:10:55.296 "claim_type": "exclusive_write", 00:10:55.296 "zoned": false, 00:10:55.296 "supported_io_types": { 00:10:55.296 "read": true, 00:10:55.296 "write": true, 00:10:55.296 "unmap": true, 00:10:55.296 "write_zeroes": true, 00:10:55.296 "flush": true, 00:10:55.296 "reset": true, 00:10:55.296 "compare": false, 00:10:55.296 "compare_and_write": false, 00:10:55.296 "abort": true, 00:10:55.296 "nvme_admin": false, 00:10:55.296 "nvme_io": false 00:10:55.296 }, 00:10:55.296 "memory_domains": [ 00:10:55.296 { 00:10:55.296 "dma_device_id": "system", 00:10:55.296 "dma_device_type": 1 00:10:55.296 }, 00:10:55.296 { 00:10:55.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.296 "dma_device_type": 2 00:10:55.296 } 00:10:55.296 ], 00:10:55.296 
"driver_specific": { 00:10:55.296 "passthru": { 00:10:55.296 "name": "pt1", 00:10:55.296 "base_bdev_name": "malloc1" 00:10:55.296 } 00:10:55.296 } 00:10:55.296 }' 00:10:55.296 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:55.555 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:55.555 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:55.555 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:55.555 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:55.555 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:55.555 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:55.555 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:55.814 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:55.814 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:55.814 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:55.814 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:55.814 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:55.814 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:55.814 03:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:56.073 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:56.073 "name": "pt2", 00:10:56.073 "aliases": [ 00:10:56.073 "1c83a32b-cba1-5c9f-928b-a4ab8bb78bd9" 00:10:56.073 ], 00:10:56.073 "product_name": "passthru", 00:10:56.073 "block_size": 512, 00:10:56.073 "num_blocks": 65536, 00:10:56.073 "uuid": "1c83a32b-cba1-5c9f-928b-a4ab8bb78bd9", 00:10:56.073 "assigned_rate_limits": { 00:10:56.073 "rw_ios_per_sec": 0, 00:10:56.073 "rw_mbytes_per_sec": 0, 00:10:56.073 "r_mbytes_per_sec": 0, 00:10:56.073 "w_mbytes_per_sec": 0 00:10:56.073 }, 00:10:56.073 "claimed": true, 00:10:56.073 "claim_type": "exclusive_write", 00:10:56.073 "zoned": false, 00:10:56.073 "supported_io_types": { 00:10:56.073 "read": true, 00:10:56.073 "write": true, 00:10:56.073 "unmap": true, 00:10:56.073 "write_zeroes": true, 00:10:56.073 "flush": true, 00:10:56.073 "reset": true, 00:10:56.073 "compare": false, 00:10:56.073 "compare_and_write": false, 00:10:56.073 "abort": true, 00:10:56.073 "nvme_admin": false, 00:10:56.073 "nvme_io": false 00:10:56.073 }, 00:10:56.073 "memory_domains": [ 00:10:56.073 { 00:10:56.073 "dma_device_id": "system", 00:10:56.073 "dma_device_type": 1 00:10:56.073 }, 00:10:56.073 { 00:10:56.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.073 "dma_device_type": 2 00:10:56.073 } 00:10:56.073 ], 00:10:56.073 "driver_specific": { 00:10:56.073 "passthru": { 00:10:56.073 "name": "pt2", 00:10:56.073 "base_bdev_name": "malloc2" 00:10:56.073 } 00:10:56.073 } 00:10:56.073 }' 00:10:56.073 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:56.073 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:56.073 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 
-- # [[ 512 == 512 ]] 00:10:56.073 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:56.073 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:56.073 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:56.073 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:56.332 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:56.332 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:56.332 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:56.332 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:56.332 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:56.332 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:56.332 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:56.591 [2024-05-15 03:05:27.605003] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 88030ae4-67f1-4207-beff-ba9baa315158 '!=' 88030ae4-67f1-4207-beff-ba9baa315158 ']' 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 4050766 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 4050766 ']' 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 4050766 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4050766 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4050766' 00:10:56.591 killing process with pid 4050766 00:10:56.591 03:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 4050766 00:10:56.591 [2024-05-15 03:05:27.668482] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.591 [2024-05-15 03:05:27.668539] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.592 [2024-05-15 03:05:27.668579] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.592 [2024-05-15 03:05:27.668587] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1420cc0 name raid_bdev1, state offline 00:10:56.592 03:05:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@970 -- # wait 4050766 00:10:56.592 [2024-05-15 03:05:27.684699] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.851 03:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:10:56.851 00:10:56.851 real 0m10.995s 00:10:56.851 user 0m20.096s 00:10:56.851 sys 0m1.567s 00:10:56.851 03:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:56.851 03:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.851 ************************************ 00:10:56.851 END TEST raid_superblock_test 00:10:56.851 ************************************ 00:10:56.851 03:05:27 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:10:56.851 03:05:27 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:56.851 03:05:27 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:56.851 03:05:27 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:56.851 03:05:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.851 ************************************ 00:10:56.851 START TEST raid_state_function_test 00:10:56.851 ************************************ 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 false 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:10:56.851 03:05:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=4052804 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4052804' 00:10:56.851 Process raid pid: 4052804 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 4052804 /var/tmp/spdk-raid.sock 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 4052804 ']' 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:56.851 03:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:56.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:56.852 03:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:56.852 03:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.111 [2024-05-15 03:05:28.043025] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:10:57.111 [2024-05-15 03:05:28.043077] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.111 [2024-05-15 03:05:28.143256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.111 [2024-05-15 03:05:28.239018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.370 [2024-05-15 03:05:28.304524] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.370 [2024-05-15 03:05:28.304555] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.937 03:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:57.937 03:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:10:57.937 03:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:58.195 [2024-05-15 03:05:29.221093] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.195 [2024-05-15 03:05:29.221132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.195 [2024-05-15 03:05:29.221142] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:58.195 [2024-05-15 03:05:29.221151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:58.195 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:58.195 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:58.195 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:58.195 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:58.195 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:58.195 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:58.195 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:58.195 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:58.195 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:58.195 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:58.195 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.195 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.453 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:58.453 "name": "Existed_Raid", 00:10:58.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.453 "strip_size_kb": 0, 00:10:58.453 "state": "configuring", 00:10:58.453 "raid_level": "raid1", 00:10:58.453 "superblock": false, 00:10:58.453 "num_base_bdevs": 2, 
00:10:58.453 "num_base_bdevs_discovered": 0, 00:10:58.453 "num_base_bdevs_operational": 2, 00:10:58.453 "base_bdevs_list": [ 00:10:58.453 { 00:10:58.453 "name": "BaseBdev1", 00:10:58.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.453 "is_configured": false, 00:10:58.453 "data_offset": 0, 00:10:58.453 "data_size": 0 00:10:58.453 }, 00:10:58.453 { 00:10:58.453 "name": "BaseBdev2", 00:10:58.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.453 "is_configured": false, 00:10:58.453 "data_offset": 0, 00:10:58.453 "data_size": 0 00:10:58.453 } 00:10:58.453 ] 00:10:58.453 }' 00:10:58.453 03:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:58.453 03:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.021 03:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:59.279 [2024-05-15 03:05:30.327928] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.279 [2024-05-15 03:05:30.327960] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x172fdc0 name Existed_Raid, state configuring 00:10:59.279 03:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:59.538 [2024-05-15 03:05:30.584608] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.538 [2024-05-15 03:05:30.584632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.538 [2024-05-15 03:05:30.584640] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.538 [2024-05-15 03:05:30.584648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.538 03:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:59.795 [2024-05-15 03:05:30.851027] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.795 BaseBdev1 00:10:59.795 03:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:10:59.795 03:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:59.795 03:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:59.795 03:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:59.795 03:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:59.795 03:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:59.795 03:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:00.053 03:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:00.312 [ 00:11:00.312 { 00:11:00.312 
"name": "BaseBdev1", 00:11:00.312 "aliases": [ 00:11:00.312 "f359a4e2-7185-43b7-922e-b5f11fd16284" 00:11:00.312 ], 00:11:00.312 "product_name": "Malloc disk", 00:11:00.312 "block_size": 512, 00:11:00.312 "num_blocks": 65536, 00:11:00.312 "uuid": "f359a4e2-7185-43b7-922e-b5f11fd16284", 00:11:00.312 "assigned_rate_limits": { 00:11:00.312 "rw_ios_per_sec": 0, 00:11:00.312 "rw_mbytes_per_sec": 0, 00:11:00.312 "r_mbytes_per_sec": 0, 00:11:00.312 "w_mbytes_per_sec": 0 00:11:00.312 }, 00:11:00.312 "claimed": true, 00:11:00.312 "claim_type": "exclusive_write", 00:11:00.312 "zoned": false, 00:11:00.312 "supported_io_types": { 00:11:00.312 "read": true, 00:11:00.312 "write": true, 00:11:00.312 "unmap": true, 00:11:00.312 "write_zeroes": true, 00:11:00.312 "flush": true, 00:11:00.312 "reset": true, 00:11:00.312 "compare": false, 00:11:00.312 "compare_and_write": false, 00:11:00.312 "abort": true, 00:11:00.312 "nvme_admin": false, 00:11:00.312 "nvme_io": false 00:11:00.312 }, 00:11:00.312 "memory_domains": [ 00:11:00.312 { 00:11:00.312 "dma_device_id": "system", 00:11:00.312 "dma_device_type": 1 00:11:00.312 }, 00:11:00.312 { 00:11:00.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.312 "dma_device_type": 2 00:11:00.312 } 00:11:00.312 ], 00:11:00.312 "driver_specific": {} 00:11:00.312 } 00:11:00.312 ] 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.312 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.572 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:00.572 "name": "Existed_Raid", 00:11:00.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.572 "strip_size_kb": 0, 00:11:00.572 "state": "configuring", 00:11:00.572 "raid_level": "raid1", 00:11:00.572 "superblock": false, 00:11:00.572 "num_base_bdevs": 2, 00:11:00.572 "num_base_bdevs_discovered": 1, 00:11:00.572 "num_base_bdevs_operational": 2, 00:11:00.572 "base_bdevs_list": [ 00:11:00.572 { 00:11:00.572 "name": "BaseBdev1", 00:11:00.572 "uuid": "f359a4e2-7185-43b7-922e-b5f11fd16284", 00:11:00.572 "is_configured": 
true, 00:11:00.572 "data_offset": 0, 00:11:00.572 "data_size": 65536 00:11:00.572 }, 00:11:00.572 { 00:11:00.572 "name": "BaseBdev2", 00:11:00.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.572 "is_configured": false, 00:11:00.572 "data_offset": 0, 00:11:00.572 "data_size": 0 00:11:00.572 } 00:11:00.572 ] 00:11:00.572 }' 00:11:00.572 03:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:00.572 03:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.139 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:01.397 [2024-05-15 03:05:32.495434] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.397 [2024-05-15 03:05:32.495470] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1730060 name Existed_Raid, state configuring 00:11:01.397 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:01.656 [2024-05-15 03:05:32.752131] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.656 [2024-05-15 03:05:32.753643] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.656 [2024-05-15 03:05:32.753672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.656 03:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.914 03:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:01.914 "name": "Existed_Raid", 00:11:01.914 "uuid": "00000000-0000-0000-0000-000000000000", 
00:11:01.914 "strip_size_kb": 0, 00:11:01.914 "state": "configuring", 00:11:01.914 "raid_level": "raid1", 00:11:01.914 "superblock": false, 00:11:01.914 "num_base_bdevs": 2, 00:11:01.914 "num_base_bdevs_discovered": 1, 00:11:01.914 "num_base_bdevs_operational": 2, 00:11:01.914 "base_bdevs_list": [ 00:11:01.914 { 00:11:01.914 "name": "BaseBdev1", 00:11:01.914 "uuid": "f359a4e2-7185-43b7-922e-b5f11fd16284", 00:11:01.915 "is_configured": true, 00:11:01.915 "data_offset": 0, 00:11:01.915 "data_size": 65536 00:11:01.915 }, 00:11:01.915 { 00:11:01.915 "name": "BaseBdev2", 00:11:01.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.915 "is_configured": false, 00:11:01.915 "data_offset": 0, 00:11:01.915 "data_size": 0 00:11:01.915 } 00:11:01.915 ] 00:11:01.915 }' 00:11:01.915 03:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:01.915 03:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.528 03:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:02.786 [2024-05-15 03:05:33.898348] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.786 [2024-05-15 03:05:33.898386] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x172f6b0 00:11:02.786 [2024-05-15 03:05:33.898393] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:02.786 [2024-05-15 03:05:33.898586] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x172fc70 00:11:02.786 [2024-05-15 03:05:33.898715] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x172f6b0 00:11:02.786 [2024-05-15 03:05:33.898723] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x172f6b0 00:11:02.786 [2024-05-15 03:05:33.898894] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.786 BaseBdev2 00:11:02.786 03:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:11:02.786 03:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:02.786 03:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:02.786 03:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:02.786 03:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:02.786 03:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:02.786 03:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:03.045 03:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:03.303 [ 00:11:03.303 { 00:11:03.303 "name": "BaseBdev2", 00:11:03.303 "aliases": [ 00:11:03.303 "c8d1b053-ce97-4775-9003-1750aded2918" 00:11:03.303 ], 00:11:03.303 "product_name": "Malloc disk", 00:11:03.303 "block_size": 512, 00:11:03.303 "num_blocks": 65536, 00:11:03.303 "uuid": "c8d1b053-ce97-4775-9003-1750aded2918", 00:11:03.303 
"assigned_rate_limits": { 00:11:03.303 "rw_ios_per_sec": 0, 00:11:03.303 "rw_mbytes_per_sec": 0, 00:11:03.303 "r_mbytes_per_sec": 0, 00:11:03.303 "w_mbytes_per_sec": 0 00:11:03.303 }, 00:11:03.303 "claimed": true, 00:11:03.303 "claim_type": "exclusive_write", 00:11:03.303 "zoned": false, 00:11:03.303 "supported_io_types": { 00:11:03.303 "read": true, 00:11:03.303 "write": true, 00:11:03.303 "unmap": true, 00:11:03.303 "write_zeroes": true, 00:11:03.303 "flush": true, 00:11:03.303 "reset": true, 00:11:03.303 "compare": false, 00:11:03.303 "compare_and_write": false, 00:11:03.303 "abort": true, 00:11:03.303 "nvme_admin": false, 00:11:03.303 "nvme_io": false 00:11:03.303 }, 00:11:03.303 "memory_domains": [ 00:11:03.303 { 00:11:03.303 "dma_device_id": "system", 00:11:03.303 "dma_device_type": 1 00:11:03.303 }, 00:11:03.303 { 00:11:03.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.303 "dma_device_type": 2 00:11:03.303 } 00:11:03.303 ], 00:11:03.303 "driver_specific": {} 00:11:03.303 } 00:11:03.303 ] 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.303 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.560 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:03.560 "name": "Existed_Raid", 00:11:03.560 "uuid": "3611dd3d-dcf3-43b9-adf8-8d59d3262da7", 00:11:03.560 "strip_size_kb": 0, 00:11:03.560 "state": "online", 00:11:03.560 "raid_level": "raid1", 00:11:03.560 "superblock": false, 00:11:03.560 "num_base_bdevs": 2, 00:11:03.560 "num_base_bdevs_discovered": 2, 00:11:03.560 "num_base_bdevs_operational": 2, 00:11:03.560 "base_bdevs_list": [ 00:11:03.560 { 00:11:03.560 "name": "BaseBdev1", 00:11:03.560 "uuid": "f359a4e2-7185-43b7-922e-b5f11fd16284", 00:11:03.561 "is_configured": true, 00:11:03.561 "data_offset": 0, 00:11:03.561 "data_size": 65536 00:11:03.561 }, 00:11:03.561 { 
00:11:03.561 "name": "BaseBdev2", 00:11:03.561 "uuid": "c8d1b053-ce97-4775-9003-1750aded2918", 00:11:03.561 "is_configured": true, 00:11:03.561 "data_offset": 0, 00:11:03.561 "data_size": 65536 00:11:03.561 } 00:11:03.561 ] 00:11:03.561 }' 00:11:03.561 03:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:03.561 03:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.494 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:11:04.494 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:11:04.494 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:11:04.494 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:11:04.494 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:11:04.494 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:11:04.494 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:04.494 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:11:04.494 [2024-05-15 03:05:35.535000] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.494 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:11:04.494 "name": "Existed_Raid", 00:11:04.494 "aliases": [ 00:11:04.494 "3611dd3d-dcf3-43b9-adf8-8d59d3262da7" 00:11:04.494 ], 00:11:04.494 "product_name": "Raid Volume", 00:11:04.494 "block_size": 512, 00:11:04.494 "num_blocks": 65536, 00:11:04.494 "uuid": "3611dd3d-dcf3-43b9-adf8-8d59d3262da7", 00:11:04.494 "assigned_rate_limits": { 00:11:04.494 "rw_ios_per_sec": 0, 00:11:04.494 "rw_mbytes_per_sec": 0, 00:11:04.494 "r_mbytes_per_sec": 0, 00:11:04.494 "w_mbytes_per_sec": 0 00:11:04.494 }, 00:11:04.494 "claimed": false, 00:11:04.494 "zoned": false, 00:11:04.494 "supported_io_types": { 00:11:04.494 "read": true, 00:11:04.494 "write": true, 00:11:04.494 "unmap": false, 00:11:04.494 "write_zeroes": true, 00:11:04.494 "flush": false, 00:11:04.494 "reset": true, 00:11:04.494 "compare": false, 00:11:04.494 "compare_and_write": false, 00:11:04.494 "abort": false, 00:11:04.494 "nvme_admin": false, 00:11:04.494 "nvme_io": false 00:11:04.494 }, 00:11:04.494 "memory_domains": [ 00:11:04.494 { 00:11:04.494 "dma_device_id": "system", 00:11:04.494 "dma_device_type": 1 00:11:04.494 }, 00:11:04.494 { 00:11:04.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.494 "dma_device_type": 2 00:11:04.494 }, 00:11:04.494 { 00:11:04.494 "dma_device_id": "system", 00:11:04.494 "dma_device_type": 1 00:11:04.494 }, 00:11:04.494 { 00:11:04.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.494 "dma_device_type": 2 00:11:04.494 } 00:11:04.494 ], 00:11:04.494 "driver_specific": { 00:11:04.494 "raid": { 00:11:04.494 "uuid": "3611dd3d-dcf3-43b9-adf8-8d59d3262da7", 00:11:04.494 "strip_size_kb": 0, 00:11:04.494 "state": "online", 00:11:04.494 "raid_level": "raid1", 00:11:04.494 "superblock": false, 00:11:04.495 "num_base_bdevs": 2, 00:11:04.495 "num_base_bdevs_discovered": 2, 00:11:04.495 "num_base_bdevs_operational": 2, 00:11:04.495 "base_bdevs_list": [ 00:11:04.495 { 00:11:04.495 "name": 
"BaseBdev1", 00:11:04.495 "uuid": "f359a4e2-7185-43b7-922e-b5f11fd16284", 00:11:04.495 "is_configured": true, 00:11:04.495 "data_offset": 0, 00:11:04.495 "data_size": 65536 00:11:04.495 }, 00:11:04.495 { 00:11:04.495 "name": "BaseBdev2", 00:11:04.495 "uuid": "c8d1b053-ce97-4775-9003-1750aded2918", 00:11:04.495 "is_configured": true, 00:11:04.495 "data_offset": 0, 00:11:04.495 "data_size": 65536 00:11:04.495 } 00:11:04.495 ] 00:11:04.495 } 00:11:04.495 } 00:11:04.495 }' 00:11:04.495 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.495 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:11:04.495 BaseBdev2' 00:11:04.495 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:04.495 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:04.495 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:04.752 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:04.752 "name": "BaseBdev1", 00:11:04.752 "aliases": [ 00:11:04.752 "f359a4e2-7185-43b7-922e-b5f11fd16284" 00:11:04.752 ], 00:11:04.752 "product_name": "Malloc disk", 00:11:04.752 "block_size": 512, 00:11:04.752 "num_blocks": 65536, 00:11:04.752 "uuid": "f359a4e2-7185-43b7-922e-b5f11fd16284", 00:11:04.752 "assigned_rate_limits": { 00:11:04.752 "rw_ios_per_sec": 0, 00:11:04.752 "rw_mbytes_per_sec": 0, 00:11:04.753 "r_mbytes_per_sec": 0, 00:11:04.753 "w_mbytes_per_sec": 0 00:11:04.753 }, 00:11:04.753 "claimed": true, 00:11:04.753 "claim_type": "exclusive_write", 00:11:04.753 "zoned": false, 00:11:04.753 "supported_io_types": { 00:11:04.753 "read": true, 00:11:04.753 "write": true, 00:11:04.753 "unmap": true, 00:11:04.753 "write_zeroes": true, 00:11:04.753 "flush": true, 00:11:04.753 "reset": true, 00:11:04.753 "compare": false, 00:11:04.753 "compare_and_write": false, 00:11:04.753 "abort": true, 00:11:04.753 "nvme_admin": false, 00:11:04.753 "nvme_io": false 00:11:04.753 }, 00:11:04.753 "memory_domains": [ 00:11:04.753 { 00:11:04.753 "dma_device_id": "system", 00:11:04.753 "dma_device_type": 1 00:11:04.753 }, 00:11:04.753 { 00:11:04.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.753 "dma_device_type": 2 00:11:04.753 } 00:11:04.753 ], 00:11:04.753 "driver_specific": {} 00:11:04.753 }' 00:11:04.753 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:04.753 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:05.010 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:05.010 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:05.010 03:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:05.010 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:05.010 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:05.010 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:05.010 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:11:05.010 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:05.010 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:05.268 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:05.268 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:05.268 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:05.268 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:05.525 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:05.525 "name": "BaseBdev2", 00:11:05.525 "aliases": [ 00:11:05.525 "c8d1b053-ce97-4775-9003-1750aded2918" 00:11:05.525 ], 00:11:05.525 "product_name": "Malloc disk", 00:11:05.525 "block_size": 512, 00:11:05.525 "num_blocks": 65536, 00:11:05.525 "uuid": "c8d1b053-ce97-4775-9003-1750aded2918", 00:11:05.525 "assigned_rate_limits": { 00:11:05.525 "rw_ios_per_sec": 0, 00:11:05.525 "rw_mbytes_per_sec": 0, 00:11:05.525 "r_mbytes_per_sec": 0, 00:11:05.525 "w_mbytes_per_sec": 0 00:11:05.525 }, 00:11:05.525 "claimed": true, 00:11:05.525 "claim_type": "exclusive_write", 00:11:05.525 "zoned": false, 00:11:05.525 "supported_io_types": { 00:11:05.525 "read": true, 00:11:05.525 "write": true, 00:11:05.525 "unmap": true, 00:11:05.525 "write_zeroes": true, 00:11:05.525 "flush": true, 00:11:05.525 "reset": true, 00:11:05.525 "compare": false, 00:11:05.525 "compare_and_write": false, 00:11:05.525 "abort": true, 00:11:05.525 "nvme_admin": false, 00:11:05.525 "nvme_io": false 00:11:05.525 }, 00:11:05.525 "memory_domains": [ 00:11:05.525 { 00:11:05.525 "dma_device_id": "system", 00:11:05.525 "dma_device_type": 1 00:11:05.525 }, 00:11:05.525 { 00:11:05.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.525 "dma_device_type": 2 00:11:05.525 } 00:11:05.525 ], 00:11:05.525 "driver_specific": {} 00:11:05.525 }' 00:11:05.525 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:05.525 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:05.525 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:05.525 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:05.525 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:05.525 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:05.525 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:05.782 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:05.782 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:05.782 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:05.782 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:05.782 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:05.782 03:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:11:06.039 [2024-05-15 03:05:37.054873] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.039 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.296 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:06.296 "name": "Existed_Raid", 00:11:06.296 "uuid": "3611dd3d-dcf3-43b9-adf8-8d59d3262da7", 00:11:06.296 "strip_size_kb": 0, 00:11:06.296 "state": "online", 00:11:06.296 "raid_level": "raid1", 00:11:06.296 "superblock": false, 00:11:06.296 "num_base_bdevs": 2, 00:11:06.296 "num_base_bdevs_discovered": 1, 00:11:06.296 "num_base_bdevs_operational": 1, 00:11:06.296 "base_bdevs_list": [ 00:11:06.296 { 00:11:06.296 "name": null, 00:11:06.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.296 "is_configured": false, 00:11:06.296 "data_offset": 0, 00:11:06.296 "data_size": 65536 00:11:06.296 }, 00:11:06.296 { 00:11:06.296 "name": "BaseBdev2", 00:11:06.296 "uuid": "c8d1b053-ce97-4775-9003-1750aded2918", 00:11:06.296 "is_configured": true, 00:11:06.296 "data_offset": 0, 00:11:06.296 "data_size": 65536 00:11:06.296 } 00:11:06.296 ] 00:11:06.296 }' 00:11:06.296 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:06.296 03:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.861 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:06.861 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.861 03:05:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.861 03:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:11:07.120 03:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:11:07.120 03:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:07.120 03:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:07.379 [2024-05-15 03:05:38.427771] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:07.379 [2024-05-15 03:05:38.427839] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.379 [2024-05-15 03:05:38.438775] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.379 [2024-05-15 03:05:38.438842] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.379 [2024-05-15 03:05:38.438867] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x172f6b0 name Existed_Raid, state offline 00:11:07.379 03:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.379 03:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.379 03:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.379 03:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 4052804 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 4052804 ']' 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 4052804 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4052804 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4052804' 00:11:07.638 killing process with pid 4052804 00:11:07.638 03:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 4052804 00:11:07.638 [2024-05-15 03:05:38.762747] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.638 
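The tail of the trace is the degradation and teardown contract: deleting BaseBdev1 leaves a raid1 with redundancy, so has_redundancy keeps the expected state online with one operational member and a null-name placeholder in base_bdevs_list; deleting BaseBdev2 removes the last member, the array transitions from online to offline and is destructed, and killprocess then stops bdev_svc. A condensed sketch of that path under the same assumed variables as the earlier sketches:

    # hypothetical standalone form of the trace's degradation/teardown steps
    $SPDK/scripts/rpc.py -s $SOCK bdev_malloc_delete BaseBdev1   # raid1 survives: online, 1 of 2 members
    $SPDK/scripts/rpc.py -s $SOCK bdev_malloc_delete BaseBdev2   # last member gone: online -> offline, destruct
    kill $RAID_PID && wait $RAID_PID                             # killprocess equivalent
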
03:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 4052804 00:11:07.638 [2024-05-15 03:05:38.763626] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.897 03:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:11:07.897 00:11:07.897 real 0m11.006s 00:11:07.897 user 0m20.057s 00:11:07.897 sys 0m1.565s 00:11:07.897 03:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:07.897 03:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.897 ************************************ 00:11:07.897 END TEST raid_state_function_test 00:11:07.897 ************************************ 00:11:07.897 03:05:39 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:07.897 03:05:39 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:07.897 03:05:39 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:07.897 03:05:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.157 ************************************ 00:11:08.157 START TEST raid_state_function_test_sb 00:11:08.157 ************************************ 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:11:08.157 
03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=4054873 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4054873' 00:11:08.157 Process raid pid: 4054873 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 4054873 /var/tmp/spdk-raid.sock 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 4054873 ']' 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:08.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:08.157 03:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.157 [2024-05-15 03:05:39.127224] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
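raid_state_function_test_sb drives the same state machine, but with superblock=true the helper sets superblock_create_arg=-s, so every bdev_raid_create call writes an on-disk superblock. As the bdev dumps below show, that reservation is visible per member: each 65536-block, 512-byte-block malloc base contributes data_offset 2048 and data_size 63488, i.e. the superblock region occupies the first 1 MiB of each base. A sketch of the only functionally different call (paths as in the earlier sketches):

    # the -s flag is the sole change in the *_sb pass
    $SPDK/scripts/rpc.py -s $SOCK bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # per member: data_offset=2048 blocks, data_size=63488 blocks (65536 - 2048)
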
00:11:08.157 [2024-05-15 03:05:39.127281] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.157 [2024-05-15 03:05:39.216604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.157 [2024-05-15 03:05:39.309731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.416 [2024-05-15 03:05:39.369706] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.416 [2024-05-15 03:05:39.369738] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.984 03:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:08.984 03:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:11:08.984 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:09.243 [2024-05-15 03:05:40.316546] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.243 [2024-05-15 03:05:40.316586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.243 [2024-05-15 03:05:40.316596] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.243 [2024-05-15 03:05:40.316605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.243 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:09.243 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:09.243 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:09.243 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:09.243 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:09.243 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:09.243 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:09.243 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:09.243 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:09.243 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:09.243 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.243 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.503 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:09.503 "name": "Existed_Raid", 00:11:09.503 "uuid": "0c11d778-7ab6-4eef-a69a-384caa53c280", 00:11:09.503 "strip_size_kb": 0, 00:11:09.503 "state": "configuring", 00:11:09.503 "raid_level": "raid1", 00:11:09.503 "superblock": 
true, 00:11:09.503 "num_base_bdevs": 2, 00:11:09.503 "num_base_bdevs_discovered": 0, 00:11:09.503 "num_base_bdevs_operational": 2, 00:11:09.503 "base_bdevs_list": [ 00:11:09.503 { 00:11:09.503 "name": "BaseBdev1", 00:11:09.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.503 "is_configured": false, 00:11:09.503 "data_offset": 0, 00:11:09.503 "data_size": 0 00:11:09.503 }, 00:11:09.503 { 00:11:09.503 "name": "BaseBdev2", 00:11:09.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.503 "is_configured": false, 00:11:09.503 "data_offset": 0, 00:11:09.503 "data_size": 0 00:11:09.503 } 00:11:09.503 ] 00:11:09.503 }' 00:11:09.503 03:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:09.503 03:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.071 03:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:10.331 [2024-05-15 03:05:41.375224] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.331 [2024-05-15 03:05:41.375257] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x130bdc0 name Existed_Raid, state configuring 00:11:10.331 03:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:10.590 [2024-05-15 03:05:41.547698] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:10.590 [2024-05-15 03:05:41.547722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:10.590 [2024-05-15 03:05:41.547730] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.590 [2024-05-15 03:05:41.547739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.590 03:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:10.590 [2024-05-15 03:05:41.729628] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.590 BaseBdev1 00:11:10.850 03:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:11:10.850 03:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:10.850 03:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:10.850 03:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:10.850 03:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:10.850 03:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:10.850 03:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:10.850 03:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:11.109 [ 00:11:11.109 { 00:11:11.109 "name": "BaseBdev1", 00:11:11.109 "aliases": [ 00:11:11.109 "44241ab0-5720-4d29-98cc-c906ff3f0e31" 00:11:11.110 ], 00:11:11.110 "product_name": "Malloc disk", 00:11:11.110 "block_size": 512, 00:11:11.110 "num_blocks": 65536, 00:11:11.110 "uuid": "44241ab0-5720-4d29-98cc-c906ff3f0e31", 00:11:11.110 "assigned_rate_limits": { 00:11:11.110 "rw_ios_per_sec": 0, 00:11:11.110 "rw_mbytes_per_sec": 0, 00:11:11.110 "r_mbytes_per_sec": 0, 00:11:11.110 "w_mbytes_per_sec": 0 00:11:11.110 }, 00:11:11.110 "claimed": true, 00:11:11.110 "claim_type": "exclusive_write", 00:11:11.110 "zoned": false, 00:11:11.110 "supported_io_types": { 00:11:11.110 "read": true, 00:11:11.110 "write": true, 00:11:11.110 "unmap": true, 00:11:11.110 "write_zeroes": true, 00:11:11.110 "flush": true, 00:11:11.110 "reset": true, 00:11:11.110 "compare": false, 00:11:11.110 "compare_and_write": false, 00:11:11.110 "abort": true, 00:11:11.110 "nvme_admin": false, 00:11:11.110 "nvme_io": false 00:11:11.110 }, 00:11:11.110 "memory_domains": [ 00:11:11.110 { 00:11:11.110 "dma_device_id": "system", 00:11:11.110 "dma_device_type": 1 00:11:11.110 }, 00:11:11.110 { 00:11:11.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.110 "dma_device_type": 2 00:11:11.110 } 00:11:11.110 ], 00:11:11.110 "driver_specific": {} 00:11:11.110 } 00:11:11.110 ] 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.110 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.370 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:11.370 "name": "Existed_Raid", 00:11:11.370 "uuid": "5be37a74-9a82-4a20-a947-cc74cb8572f9", 00:11:11.370 "strip_size_kb": 0, 00:11:11.370 "state": "configuring", 00:11:11.370 "raid_level": "raid1", 00:11:11.370 "superblock": true, 00:11:11.370 "num_base_bdevs": 2, 00:11:11.370 "num_base_bdevs_discovered": 1, 00:11:11.370 "num_base_bdevs_operational": 2, 00:11:11.370 "base_bdevs_list": [ 00:11:11.370 { 
00:11:11.370 "name": "BaseBdev1", 00:11:11.370 "uuid": "44241ab0-5720-4d29-98cc-c906ff3f0e31", 00:11:11.370 "is_configured": true, 00:11:11.370 "data_offset": 2048, 00:11:11.370 "data_size": 63488 00:11:11.370 }, 00:11:11.370 { 00:11:11.370 "name": "BaseBdev2", 00:11:11.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.370 "is_configured": false, 00:11:11.370 "data_offset": 0, 00:11:11.370 "data_size": 0 00:11:11.370 } 00:11:11.370 ] 00:11:11.370 }' 00:11:11.370 03:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:11.370 03:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.938 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:12.197 [2024-05-15 03:05:43.221620] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.197 [2024-05-15 03:05:43.221659] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x130c060 name Existed_Raid, state configuring 00:11:12.197 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:12.461 [2024-05-15 03:05:43.482353] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.461 [2024-05-15 03:05:43.483893] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.461 [2024-05-15 03:05:43.483925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.461 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.725 
03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:12.725 "name": "Existed_Raid", 00:11:12.725 "uuid": "f3205172-5d0b-4fc2-89e2-067541cbd4fa", 00:11:12.725 "strip_size_kb": 0, 00:11:12.725 "state": "configuring", 00:11:12.725 "raid_level": "raid1", 00:11:12.725 "superblock": true, 00:11:12.725 "num_base_bdevs": 2, 00:11:12.725 "num_base_bdevs_discovered": 1, 00:11:12.725 "num_base_bdevs_operational": 2, 00:11:12.725 "base_bdevs_list": [ 00:11:12.725 { 00:11:12.725 "name": "BaseBdev1", 00:11:12.725 "uuid": "44241ab0-5720-4d29-98cc-c906ff3f0e31", 00:11:12.725 "is_configured": true, 00:11:12.725 "data_offset": 2048, 00:11:12.725 "data_size": 63488 00:11:12.725 }, 00:11:12.725 { 00:11:12.725 "name": "BaseBdev2", 00:11:12.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.725 "is_configured": false, 00:11:12.725 "data_offset": 0, 00:11:12.725 "data_size": 0 00:11:12.725 } 00:11:12.725 ] 00:11:12.725 }' 00:11:12.725 03:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:12.725 03:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.294 03:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:13.552 [2024-05-15 03:05:44.612592] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.552 [2024-05-15 03:05:44.612737] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x130b6b0 00:11:13.552 [2024-05-15 03:05:44.612750] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:13.553 [2024-05-15 03:05:44.612949] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x130bc70 00:11:13.553 [2024-05-15 03:05:44.613083] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x130b6b0 00:11:13.553 [2024-05-15 03:05:44.613092] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x130b6b0 00:11:13.553 [2024-05-15 03:05:44.613189] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.553 BaseBdev2 00:11:13.553 03:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:11:13.553 03:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:13.553 03:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:13.553 03:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:13.553 03:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:13.553 03:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:13.553 03:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:13.811 03:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:14.070 [ 00:11:14.070 { 00:11:14.070 "name": "BaseBdev2", 00:11:14.070 "aliases": [ 00:11:14.070 
"a7aa8372-87b6-4f6b-9327-49b8a20f7ad9" 00:11:14.070 ], 00:11:14.070 "product_name": "Malloc disk", 00:11:14.070 "block_size": 512, 00:11:14.070 "num_blocks": 65536, 00:11:14.070 "uuid": "a7aa8372-87b6-4f6b-9327-49b8a20f7ad9", 00:11:14.070 "assigned_rate_limits": { 00:11:14.070 "rw_ios_per_sec": 0, 00:11:14.070 "rw_mbytes_per_sec": 0, 00:11:14.070 "r_mbytes_per_sec": 0, 00:11:14.070 "w_mbytes_per_sec": 0 00:11:14.070 }, 00:11:14.070 "claimed": true, 00:11:14.070 "claim_type": "exclusive_write", 00:11:14.070 "zoned": false, 00:11:14.070 "supported_io_types": { 00:11:14.070 "read": true, 00:11:14.070 "write": true, 00:11:14.070 "unmap": true, 00:11:14.070 "write_zeroes": true, 00:11:14.070 "flush": true, 00:11:14.070 "reset": true, 00:11:14.070 "compare": false, 00:11:14.070 "compare_and_write": false, 00:11:14.070 "abort": true, 00:11:14.070 "nvme_admin": false, 00:11:14.070 "nvme_io": false 00:11:14.070 }, 00:11:14.070 "memory_domains": [ 00:11:14.070 { 00:11:14.070 "dma_device_id": "system", 00:11:14.070 "dma_device_type": 1 00:11:14.070 }, 00:11:14.070 { 00:11:14.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.070 "dma_device_type": 2 00:11:14.070 } 00:11:14.070 ], 00:11:14.070 "driver_specific": {} 00:11:14.070 } 00:11:14.070 ] 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.070 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.329 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:14.329 "name": "Existed_Raid", 00:11:14.329 "uuid": "f3205172-5d0b-4fc2-89e2-067541cbd4fa", 00:11:14.329 "strip_size_kb": 0, 00:11:14.329 "state": "online", 00:11:14.329 "raid_level": "raid1", 00:11:14.329 "superblock": true, 00:11:14.329 "num_base_bdevs": 2, 00:11:14.329 "num_base_bdevs_discovered": 2, 00:11:14.329 
"num_base_bdevs_operational": 2, 00:11:14.329 "base_bdevs_list": [ 00:11:14.329 { 00:11:14.329 "name": "BaseBdev1", 00:11:14.329 "uuid": "44241ab0-5720-4d29-98cc-c906ff3f0e31", 00:11:14.329 "is_configured": true, 00:11:14.329 "data_offset": 2048, 00:11:14.329 "data_size": 63488 00:11:14.329 }, 00:11:14.329 { 00:11:14.329 "name": "BaseBdev2", 00:11:14.329 "uuid": "a7aa8372-87b6-4f6b-9327-49b8a20f7ad9", 00:11:14.329 "is_configured": true, 00:11:14.329 "data_offset": 2048, 00:11:14.329 "data_size": 63488 00:11:14.329 } 00:11:14.329 ] 00:11:14.329 }' 00:11:14.329 03:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:14.329 03:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.896 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:11:14.896 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:11:14.896 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:11:14.896 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:11:14.896 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:11:14.896 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:11:14.896 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:14.896 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:11:15.155 [2024-05-15 03:05:46.253424] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.155 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:11:15.155 "name": "Existed_Raid", 00:11:15.155 "aliases": [ 00:11:15.155 "f3205172-5d0b-4fc2-89e2-067541cbd4fa" 00:11:15.155 ], 00:11:15.155 "product_name": "Raid Volume", 00:11:15.155 "block_size": 512, 00:11:15.155 "num_blocks": 63488, 00:11:15.155 "uuid": "f3205172-5d0b-4fc2-89e2-067541cbd4fa", 00:11:15.155 "assigned_rate_limits": { 00:11:15.155 "rw_ios_per_sec": 0, 00:11:15.155 "rw_mbytes_per_sec": 0, 00:11:15.155 "r_mbytes_per_sec": 0, 00:11:15.155 "w_mbytes_per_sec": 0 00:11:15.155 }, 00:11:15.155 "claimed": false, 00:11:15.155 "zoned": false, 00:11:15.155 "supported_io_types": { 00:11:15.155 "read": true, 00:11:15.155 "write": true, 00:11:15.155 "unmap": false, 00:11:15.155 "write_zeroes": true, 00:11:15.155 "flush": false, 00:11:15.155 "reset": true, 00:11:15.155 "compare": false, 00:11:15.155 "compare_and_write": false, 00:11:15.155 "abort": false, 00:11:15.155 "nvme_admin": false, 00:11:15.155 "nvme_io": false 00:11:15.155 }, 00:11:15.155 "memory_domains": [ 00:11:15.155 { 00:11:15.155 "dma_device_id": "system", 00:11:15.155 "dma_device_type": 1 00:11:15.155 }, 00:11:15.155 { 00:11:15.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.155 "dma_device_type": 2 00:11:15.155 }, 00:11:15.155 { 00:11:15.155 "dma_device_id": "system", 00:11:15.155 "dma_device_type": 1 00:11:15.155 }, 00:11:15.155 { 00:11:15.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.155 "dma_device_type": 2 00:11:15.155 } 00:11:15.155 ], 00:11:15.155 "driver_specific": { 00:11:15.155 "raid": { 00:11:15.155 "uuid": 
"f3205172-5d0b-4fc2-89e2-067541cbd4fa", 00:11:15.155 "strip_size_kb": 0, 00:11:15.155 "state": "online", 00:11:15.155 "raid_level": "raid1", 00:11:15.155 "superblock": true, 00:11:15.155 "num_base_bdevs": 2, 00:11:15.155 "num_base_bdevs_discovered": 2, 00:11:15.155 "num_base_bdevs_operational": 2, 00:11:15.155 "base_bdevs_list": [ 00:11:15.155 { 00:11:15.155 "name": "BaseBdev1", 00:11:15.155 "uuid": "44241ab0-5720-4d29-98cc-c906ff3f0e31", 00:11:15.155 "is_configured": true, 00:11:15.155 "data_offset": 2048, 00:11:15.155 "data_size": 63488 00:11:15.155 }, 00:11:15.155 { 00:11:15.155 "name": "BaseBdev2", 00:11:15.155 "uuid": "a7aa8372-87b6-4f6b-9327-49b8a20f7ad9", 00:11:15.155 "is_configured": true, 00:11:15.155 "data_offset": 2048, 00:11:15.155 "data_size": 63488 00:11:15.155 } 00:11:15.155 ] 00:11:15.155 } 00:11:15.155 } 00:11:15.155 }' 00:11:15.155 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.413 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:11:15.413 BaseBdev2' 00:11:15.413 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:15.413 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:15.413 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:15.672 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:15.672 "name": "BaseBdev1", 00:11:15.672 "aliases": [ 00:11:15.672 "44241ab0-5720-4d29-98cc-c906ff3f0e31" 00:11:15.672 ], 00:11:15.672 "product_name": "Malloc disk", 00:11:15.672 "block_size": 512, 00:11:15.672 "num_blocks": 65536, 00:11:15.672 "uuid": "44241ab0-5720-4d29-98cc-c906ff3f0e31", 00:11:15.672 "assigned_rate_limits": { 00:11:15.672 "rw_ios_per_sec": 0, 00:11:15.672 "rw_mbytes_per_sec": 0, 00:11:15.672 "r_mbytes_per_sec": 0, 00:11:15.672 "w_mbytes_per_sec": 0 00:11:15.672 }, 00:11:15.672 "claimed": true, 00:11:15.672 "claim_type": "exclusive_write", 00:11:15.672 "zoned": false, 00:11:15.672 "supported_io_types": { 00:11:15.672 "read": true, 00:11:15.672 "write": true, 00:11:15.672 "unmap": true, 00:11:15.672 "write_zeroes": true, 00:11:15.672 "flush": true, 00:11:15.672 "reset": true, 00:11:15.672 "compare": false, 00:11:15.672 "compare_and_write": false, 00:11:15.672 "abort": true, 00:11:15.672 "nvme_admin": false, 00:11:15.672 "nvme_io": false 00:11:15.672 }, 00:11:15.672 "memory_domains": [ 00:11:15.672 { 00:11:15.672 "dma_device_id": "system", 00:11:15.672 "dma_device_type": 1 00:11:15.672 }, 00:11:15.672 { 00:11:15.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.672 "dma_device_type": 2 00:11:15.672 } 00:11:15.672 ], 00:11:15.672 "driver_specific": {} 00:11:15.672 }' 00:11:15.672 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:15.672 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:15.672 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:15.672 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:15.672 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:15.672 
03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:15.672 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:15.672 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:15.931 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:15.931 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:15.931 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:15.931 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:15.931 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:15.931 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:15.931 03:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:16.189 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:16.189 "name": "BaseBdev2", 00:11:16.189 "aliases": [ 00:11:16.189 "a7aa8372-87b6-4f6b-9327-49b8a20f7ad9" 00:11:16.189 ], 00:11:16.189 "product_name": "Malloc disk", 00:11:16.189 "block_size": 512, 00:11:16.189 "num_blocks": 65536, 00:11:16.189 "uuid": "a7aa8372-87b6-4f6b-9327-49b8a20f7ad9", 00:11:16.189 "assigned_rate_limits": { 00:11:16.189 "rw_ios_per_sec": 0, 00:11:16.189 "rw_mbytes_per_sec": 0, 00:11:16.189 "r_mbytes_per_sec": 0, 00:11:16.189 "w_mbytes_per_sec": 0 00:11:16.189 }, 00:11:16.189 "claimed": true, 00:11:16.189 "claim_type": "exclusive_write", 00:11:16.189 "zoned": false, 00:11:16.189 "supported_io_types": { 00:11:16.189 "read": true, 00:11:16.189 "write": true, 00:11:16.189 "unmap": true, 00:11:16.189 "write_zeroes": true, 00:11:16.189 "flush": true, 00:11:16.189 "reset": true, 00:11:16.189 "compare": false, 00:11:16.189 "compare_and_write": false, 00:11:16.189 "abort": true, 00:11:16.189 "nvme_admin": false, 00:11:16.189 "nvme_io": false 00:11:16.190 }, 00:11:16.190 "memory_domains": [ 00:11:16.190 { 00:11:16.190 "dma_device_id": "system", 00:11:16.190 "dma_device_type": 1 00:11:16.190 }, 00:11:16.190 { 00:11:16.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.190 "dma_device_type": 2 00:11:16.190 } 00:11:16.190 ], 00:11:16.190 "driver_specific": {} 00:11:16.190 }' 00:11:16.190 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:16.190 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:16.190 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:16.190 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:16.190 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:16.490 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:16.490 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:16.490 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:16.490 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:16.490 
03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:16.490 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:16.490 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:16.490 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:16.782 [2024-05-15 03:05:47.801370] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.782 03:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.040 03:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:17.040 "name": "Existed_Raid", 00:11:17.040 "uuid": "f3205172-5d0b-4fc2-89e2-067541cbd4fa", 00:11:17.040 "strip_size_kb": 0, 00:11:17.040 "state": "online", 00:11:17.040 "raid_level": "raid1", 00:11:17.040 "superblock": true, 00:11:17.040 "num_base_bdevs": 2, 00:11:17.040 "num_base_bdevs_discovered": 1, 00:11:17.040 "num_base_bdevs_operational": 1, 00:11:17.040 "base_bdevs_list": [ 00:11:17.040 { 00:11:17.040 "name": null, 00:11:17.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.040 "is_configured": false, 00:11:17.040 "data_offset": 2048, 00:11:17.040 "data_size": 63488 00:11:17.040 }, 00:11:17.040 { 00:11:17.040 "name": "BaseBdev2", 00:11:17.040 "uuid": "a7aa8372-87b6-4f6b-9327-49b8a20f7ad9", 00:11:17.040 "is_configured": true, 00:11:17.040 
"data_offset": 2048, 00:11:17.040 "data_size": 63488 00:11:17.040 } 00:11:17.040 ] 00:11:17.040 }' 00:11:17.040 03:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:17.040 03:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.607 03:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:17.607 03:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.607 03:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.607 03:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:11:17.866 03:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:11:17.866 03:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.866 03:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:18.125 [2024-05-15 03:05:49.186208] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:18.125 [2024-05-15 03:05:49.186283] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.125 [2024-05-15 03:05:49.196832] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.125 [2024-05-15 03:05:49.196906] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.125 [2024-05-15 03:05:49.196917] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x130b6b0 name Existed_Raid, state offline 00:11:18.125 03:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.125 03:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.125 03:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.125 03:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 4054873 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 4054873 ']' 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 4054873 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4054873 00:11:18.384 03:05:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4054873' 00:11:18.384 killing process with pid 4054873 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 4054873 00:11:18.384 [2024-05-15 03:05:49.522517] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.384 03:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 4054873 00:11:18.384 [2024-05-15 03:05:49.523368] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.643 03:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:11:18.643 00:11:18.643 real 0m10.681s 00:11:18.643 user 0m19.371s 00:11:18.643 sys 0m1.615s 00:11:18.643 03:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:18.643 03:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.643 ************************************ 00:11:18.643 END TEST raid_state_function_test_sb 00:11:18.643 ************************************ 00:11:18.643 03:05:49 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:18.643 03:05:49 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:18.643 03:05:49 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:18.643 03:05:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.902 ************************************ 00:11:18.902 START TEST raid_superblock_test 00:11:18.902 ************************************ 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:18.902 03:05:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=4056886 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 4056886 /var/tmp/spdk-raid.sock 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 4056886 ']' 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:18.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:18.902 03:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.902 [2024-05-15 03:05:49.876833] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:11:18.902 [2024-05-15 03:05:49.876893] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4056886 ] 00:11:18.902 [2024-05-15 03:05:49.972917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.161 [2024-05-15 03:05:50.079557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.161 [2024-05-15 03:05:50.141659] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.161 [2024-05-15 03:05:50.141695] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.729 03:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:19.729 03:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:11:19.729 03:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:19.729 03:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.729 03:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:19.729 03:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:19.729 03:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:19.729 03:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.729 03:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.729 03:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.729 03:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:19.988 malloc1 00:11:19.988 03:05:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:20.246 [2024-05-15 03:05:51.263870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:20.246 [2024-05-15 03:05:51.263913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.247 [2024-05-15 03:05:51.263935] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdfda00 00:11:20.247 [2024-05-15 03:05:51.263944] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.247 [2024-05-15 03:05:51.265642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.247 [2024-05-15 03:05:51.265668] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:20.247 pt1 00:11:20.247 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:20.247 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.247 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:20.247 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:20.247 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:20.247 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:20.247 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:20.247 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:20.247 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:20.506 malloc2 00:11:20.506 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:20.765 [2024-05-15 03:05:51.777933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:20.765 [2024-05-15 03:05:51.777974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.765 [2024-05-15 03:05:51.777997] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdfe5f0 00:11:20.765 [2024-05-15 03:05:51.778007] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.765 [2024-05-15 03:05:51.779569] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.765 [2024-05-15 03:05:51.779595] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:20.765 pt2 00:11:20.765 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:20.765 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.765 03:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:11:21.024 [2024-05-15 03:05:52.030622] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:21.024 [2024-05-15 03:05:52.031979] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:21.024 [2024-05-15 03:05:52.032129] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0xfa3760 00:11:21.024 [2024-05-15 03:05:52.032141] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:21.024 [2024-05-15 03:05:52.032343] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfb0560 00:11:21.024 [2024-05-15 03:05:52.032500] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfa3760 00:11:21.024 [2024-05-15 03:05:52.032509] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xfa3760 00:11:21.024 [2024-05-15 03:05:52.032613] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.024 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:21.024 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:21.024 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:21.024 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:21.024 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:21.024 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:21.024 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:21.024 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:21.024 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:21.024 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:21.024 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.024 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.283 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:21.283 "name": "raid_bdev1", 00:11:21.283 "uuid": "52f64011-aebf-4a96-94a5-649dee4d5993", 00:11:21.283 "strip_size_kb": 0, 00:11:21.283 "state": "online", 00:11:21.283 "raid_level": "raid1", 00:11:21.283 "superblock": true, 00:11:21.283 "num_base_bdevs": 2, 00:11:21.283 "num_base_bdevs_discovered": 2, 00:11:21.283 "num_base_bdevs_operational": 2, 00:11:21.283 "base_bdevs_list": [ 00:11:21.283 { 00:11:21.283 "name": "pt1", 00:11:21.283 "uuid": "8f840dce-2361-5c16-849f-cd7db81b8113", 00:11:21.283 "is_configured": true, 00:11:21.283 "data_offset": 2048, 00:11:21.283 "data_size": 63488 00:11:21.283 }, 00:11:21.283 { 00:11:21.283 "name": "pt2", 00:11:21.283 "uuid": "fb69b2bc-1062-5e81-83d4-7681c7660844", 00:11:21.283 "is_configured": true, 00:11:21.283 "data_offset": 2048, 00:11:21.283 "data_size": 63488 00:11:21.283 } 00:11:21.283 ] 00:11:21.283 }' 00:11:21.283 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:21.283 03:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.850 03:05:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:21.850 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:11:21.850 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:11:21.850 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:11:21.850 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:11:21.851 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:11:21.851 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:21.851 03:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:11:22.109 [2024-05-15 03:05:53.149810] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.109 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:11:22.109 "name": "raid_bdev1", 00:11:22.109 "aliases": [ 00:11:22.109 "52f64011-aebf-4a96-94a5-649dee4d5993" 00:11:22.109 ], 00:11:22.109 "product_name": "Raid Volume", 00:11:22.109 "block_size": 512, 00:11:22.109 "num_blocks": 63488, 00:11:22.109 "uuid": "52f64011-aebf-4a96-94a5-649dee4d5993", 00:11:22.109 "assigned_rate_limits": { 00:11:22.109 "rw_ios_per_sec": 0, 00:11:22.109 "rw_mbytes_per_sec": 0, 00:11:22.109 "r_mbytes_per_sec": 0, 00:11:22.109 "w_mbytes_per_sec": 0 00:11:22.109 }, 00:11:22.109 "claimed": false, 00:11:22.109 "zoned": false, 00:11:22.109 "supported_io_types": { 00:11:22.109 "read": true, 00:11:22.109 "write": true, 00:11:22.109 "unmap": false, 00:11:22.109 "write_zeroes": true, 00:11:22.109 "flush": false, 00:11:22.109 "reset": true, 00:11:22.109 "compare": false, 00:11:22.109 "compare_and_write": false, 00:11:22.109 "abort": false, 00:11:22.109 "nvme_admin": false, 00:11:22.109 "nvme_io": false 00:11:22.109 }, 00:11:22.109 "memory_domains": [ 00:11:22.109 { 00:11:22.109 "dma_device_id": "system", 00:11:22.109 "dma_device_type": 1 00:11:22.109 }, 00:11:22.109 { 00:11:22.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.109 "dma_device_type": 2 00:11:22.109 }, 00:11:22.109 { 00:11:22.109 "dma_device_id": "system", 00:11:22.109 "dma_device_type": 1 00:11:22.109 }, 00:11:22.109 { 00:11:22.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.109 "dma_device_type": 2 00:11:22.109 } 00:11:22.109 ], 00:11:22.109 "driver_specific": { 00:11:22.109 "raid": { 00:11:22.109 "uuid": "52f64011-aebf-4a96-94a5-649dee4d5993", 00:11:22.109 "strip_size_kb": 0, 00:11:22.109 "state": "online", 00:11:22.110 "raid_level": "raid1", 00:11:22.110 "superblock": true, 00:11:22.110 "num_base_bdevs": 2, 00:11:22.110 "num_base_bdevs_discovered": 2, 00:11:22.110 "num_base_bdevs_operational": 2, 00:11:22.110 "base_bdevs_list": [ 00:11:22.110 { 00:11:22.110 "name": "pt1", 00:11:22.110 "uuid": "8f840dce-2361-5c16-849f-cd7db81b8113", 00:11:22.110 "is_configured": true, 00:11:22.110 "data_offset": 2048, 00:11:22.110 "data_size": 63488 00:11:22.110 }, 00:11:22.110 { 00:11:22.110 "name": "pt2", 00:11:22.110 "uuid": "fb69b2bc-1062-5e81-83d4-7681c7660844", 00:11:22.110 "is_configured": true, 00:11:22.110 "data_offset": 2048, 00:11:22.110 "data_size": 63488 00:11:22.110 } 00:11:22.110 ] 00:11:22.110 } 00:11:22.110 } 00:11:22.110 }' 00:11:22.110 03:05:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.110 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:11:22.110 pt2' 00:11:22.110 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:22.110 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:22.110 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:22.368 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:22.368 "name": "pt1", 00:11:22.368 "aliases": [ 00:11:22.368 "8f840dce-2361-5c16-849f-cd7db81b8113" 00:11:22.368 ], 00:11:22.368 "product_name": "passthru", 00:11:22.368 "block_size": 512, 00:11:22.368 "num_blocks": 65536, 00:11:22.368 "uuid": "8f840dce-2361-5c16-849f-cd7db81b8113", 00:11:22.368 "assigned_rate_limits": { 00:11:22.368 "rw_ios_per_sec": 0, 00:11:22.368 "rw_mbytes_per_sec": 0, 00:11:22.368 "r_mbytes_per_sec": 0, 00:11:22.368 "w_mbytes_per_sec": 0 00:11:22.368 }, 00:11:22.368 "claimed": true, 00:11:22.368 "claim_type": "exclusive_write", 00:11:22.368 "zoned": false, 00:11:22.368 "supported_io_types": { 00:11:22.368 "read": true, 00:11:22.368 "write": true, 00:11:22.368 "unmap": true, 00:11:22.368 "write_zeroes": true, 00:11:22.368 "flush": true, 00:11:22.368 "reset": true, 00:11:22.368 "compare": false, 00:11:22.369 "compare_and_write": false, 00:11:22.369 "abort": true, 00:11:22.369 "nvme_admin": false, 00:11:22.369 "nvme_io": false 00:11:22.369 }, 00:11:22.369 "memory_domains": [ 00:11:22.369 { 00:11:22.369 "dma_device_id": "system", 00:11:22.369 "dma_device_type": 1 00:11:22.369 }, 00:11:22.369 { 00:11:22.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.369 "dma_device_type": 2 00:11:22.369 } 00:11:22.369 ], 00:11:22.369 "driver_specific": { 00:11:22.369 "passthru": { 00:11:22.369 "name": "pt1", 00:11:22.369 "base_bdev_name": "malloc1" 00:11:22.369 } 00:11:22.369 } 00:11:22.369 }' 00:11:22.369 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:22.369 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:22.628 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:22.628 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:22.628 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:22.628 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:22.628 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:22.628 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:22.628 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:22.628 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:22.887 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:22.887 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:22.887 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:22.887 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:22.887 03:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:23.145 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:23.145 "name": "pt2", 00:11:23.146 "aliases": [ 00:11:23.146 "fb69b2bc-1062-5e81-83d4-7681c7660844" 00:11:23.146 ], 00:11:23.146 "product_name": "passthru", 00:11:23.146 "block_size": 512, 00:11:23.146 "num_blocks": 65536, 00:11:23.146 "uuid": "fb69b2bc-1062-5e81-83d4-7681c7660844", 00:11:23.146 "assigned_rate_limits": { 00:11:23.146 "rw_ios_per_sec": 0, 00:11:23.146 "rw_mbytes_per_sec": 0, 00:11:23.146 "r_mbytes_per_sec": 0, 00:11:23.146 "w_mbytes_per_sec": 0 00:11:23.146 }, 00:11:23.146 "claimed": true, 00:11:23.146 "claim_type": "exclusive_write", 00:11:23.146 "zoned": false, 00:11:23.146 "supported_io_types": { 00:11:23.146 "read": true, 00:11:23.146 "write": true, 00:11:23.146 "unmap": true, 00:11:23.146 "write_zeroes": true, 00:11:23.146 "flush": true, 00:11:23.146 "reset": true, 00:11:23.146 "compare": false, 00:11:23.146 "compare_and_write": false, 00:11:23.146 "abort": true, 00:11:23.146 "nvme_admin": false, 00:11:23.146 "nvme_io": false 00:11:23.146 }, 00:11:23.146 "memory_domains": [ 00:11:23.146 { 00:11:23.146 "dma_device_id": "system", 00:11:23.146 "dma_device_type": 1 00:11:23.146 }, 00:11:23.146 { 00:11:23.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.146 "dma_device_type": 2 00:11:23.146 } 00:11:23.146 ], 00:11:23.146 "driver_specific": { 00:11:23.146 "passthru": { 00:11:23.146 "name": "pt2", 00:11:23.146 "base_bdev_name": "malloc2" 00:11:23.146 } 00:11:23.146 } 00:11:23.146 }' 00:11:23.146 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:23.146 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:23.146 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:23.146 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:23.146 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:23.146 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:23.146 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:23.405 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:23.405 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:23.405 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:23.405 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:23.405 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:23.405 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:23.405 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:23.664 [2024-05-15 03:05:54.689927] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.664 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=52f64011-aebf-4a96-94a5-649dee4d5993 00:11:23.664 03:05:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@436 -- # '[' -z 52f64011-aebf-4a96-94a5-649dee4d5993 ']' 00:11:23.664 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:23.922 [2024-05-15 03:05:54.942374] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:23.922 [2024-05-15 03:05:54.942396] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.922 [2024-05-15 03:05:54.942449] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.922 [2024-05-15 03:05:54.942503] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.922 [2024-05-15 03:05:54.942511] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfa3760 name raid_bdev1, state offline 00:11:23.922 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.922 03:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:24.181 03:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:24.181 03:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:24.181 03:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:24.181 03:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:24.440 03:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:24.440 03:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:24.700 03:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:24.700 03:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:24.959 03:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:24.959 03:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:24.959 03:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:11:24.959 03:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:24.959 03:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:24.959 03:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.959 03:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:24.959 03:05:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.959 03:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:24.959 03:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.959 03:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:24.959 03:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:24.959 03:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:25.219 [2024-05-15 03:05:56.205689] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:25.219 [2024-05-15 03:05:56.207112] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:25.219 [2024-05-15 03:05:56.207165] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:25.219 [2024-05-15 03:05:56.207203] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:25.219 [2024-05-15 03:05:56.207218] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.219 [2024-05-15 03:05:56.207226] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfa5190 name raid_bdev1, state configuring 00:11:25.219 request: 00:11:25.219 { 00:11:25.219 "name": "raid_bdev1", 00:11:25.219 "raid_level": "raid1", 00:11:25.219 "base_bdevs": [ 00:11:25.219 "malloc1", 00:11:25.219 "malloc2" 00:11:25.219 ], 00:11:25.219 "superblock": false, 00:11:25.219 "method": "bdev_raid_create", 00:11:25.219 "req_id": 1 00:11:25.219 } 00:11:25.219 Got JSON-RPC error response 00:11:25.219 response: 00:11:25.219 { 00:11:25.219 "code": -17, 00:11:25.219 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:25.219 } 00:11:25.219 03:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:11:25.219 03:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:25.219 03:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:25.219 03:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:25.219 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.219 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:25.478 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:25.478 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:25.478 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:25.736 [2024-05-15 03:05:56.706958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc1 00:11:25.737 [2024-05-15 03:05:56.706991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.737 [2024-05-15 03:05:56.707013] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfa1320 00:11:25.737 [2024-05-15 03:05:56.707023] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.737 [2024-05-15 03:05:56.708680] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.737 [2024-05-15 03:05:56.708707] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:25.737 [2024-05-15 03:05:56.708767] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:11:25.737 [2024-05-15 03:05:56.708790] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:25.737 pt1 00:11:25.737 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:25.737 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:25.737 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:25.737 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:25.737 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:25.737 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:25.737 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:25.737 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:25.737 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:25.737 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:25.737 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.737 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.996 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:25.996 "name": "raid_bdev1", 00:11:25.996 "uuid": "52f64011-aebf-4a96-94a5-649dee4d5993", 00:11:25.996 "strip_size_kb": 0, 00:11:25.996 "state": "configuring", 00:11:25.996 "raid_level": "raid1", 00:11:25.996 "superblock": true, 00:11:25.996 "num_base_bdevs": 2, 00:11:25.996 "num_base_bdevs_discovered": 1, 00:11:25.996 "num_base_bdevs_operational": 2, 00:11:25.996 "base_bdevs_list": [ 00:11:25.996 { 00:11:25.996 "name": "pt1", 00:11:25.996 "uuid": "8f840dce-2361-5c16-849f-cd7db81b8113", 00:11:25.996 "is_configured": true, 00:11:25.996 "data_offset": 2048, 00:11:25.996 "data_size": 63488 00:11:25.996 }, 00:11:25.996 { 00:11:25.996 "name": null, 00:11:25.996 "uuid": "fb69b2bc-1062-5e81-83d4-7681c7660844", 00:11:25.996 "is_configured": false, 00:11:25.996 "data_offset": 2048, 00:11:25.996 "data_size": 63488 00:11:25.996 } 00:11:25.996 ] 00:11:25.996 }' 00:11:25.996 03:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:25.996 03:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.564 03:05:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:26.564 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:26.564 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.564 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:26.823 [2024-05-15 03:05:57.825968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:26.823 [2024-05-15 03:05:57.826012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.823 [2024-05-15 03:05:57.826032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdfdc30 00:11:26.823 [2024-05-15 03:05:57.826042] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.823 [2024-05-15 03:05:57.826381] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.823 [2024-05-15 03:05:57.826398] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:26.823 [2024-05-15 03:05:57.826455] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:26.823 [2024-05-15 03:05:57.826472] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:26.823 [2024-05-15 03:05:57.826569] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0xfa5cc0 00:11:26.823 [2024-05-15 03:05:57.826578] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:26.823 [2024-05-15 03:05:57.826750] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdf65d0 00:11:26.823 [2024-05-15 03:05:57.826890] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfa5cc0 00:11:26.823 [2024-05-15 03:05:57.826899] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xfa5cc0 00:11:26.823 [2024-05-15 03:05:57.826998] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.823 pt2 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:26.823 
03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.823 03:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.082 03:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:27.082 "name": "raid_bdev1", 00:11:27.082 "uuid": "52f64011-aebf-4a96-94a5-649dee4d5993", 00:11:27.082 "strip_size_kb": 0, 00:11:27.082 "state": "online", 00:11:27.082 "raid_level": "raid1", 00:11:27.082 "superblock": true, 00:11:27.082 "num_base_bdevs": 2, 00:11:27.082 "num_base_bdevs_discovered": 2, 00:11:27.082 "num_base_bdevs_operational": 2, 00:11:27.082 "base_bdevs_list": [ 00:11:27.082 { 00:11:27.082 "name": "pt1", 00:11:27.082 "uuid": "8f840dce-2361-5c16-849f-cd7db81b8113", 00:11:27.082 "is_configured": true, 00:11:27.082 "data_offset": 2048, 00:11:27.082 "data_size": 63488 00:11:27.082 }, 00:11:27.082 { 00:11:27.082 "name": "pt2", 00:11:27.082 "uuid": "fb69b2bc-1062-5e81-83d4-7681c7660844", 00:11:27.082 "is_configured": true, 00:11:27.082 "data_offset": 2048, 00:11:27.082 "data_size": 63488 00:11:27.082 } 00:11:27.082 ] 00:11:27.082 }' 00:11:27.082 03:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:27.082 03:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.655 03:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:27.655 03:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:11:27.655 03:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:11:27.655 03:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:11:27.655 03:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:11:27.655 03:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:11:27.656 03:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:27.656 03:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:11:27.918 [2024-05-15 03:05:58.969252] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.918 03:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:11:27.918 "name": "raid_bdev1", 00:11:27.918 "aliases": [ 00:11:27.918 "52f64011-aebf-4a96-94a5-649dee4d5993" 00:11:27.918 ], 00:11:27.918 "product_name": "Raid Volume", 00:11:27.918 "block_size": 512, 00:11:27.918 "num_blocks": 63488, 00:11:27.918 "uuid": "52f64011-aebf-4a96-94a5-649dee4d5993", 00:11:27.918 "assigned_rate_limits": { 00:11:27.918 "rw_ios_per_sec": 0, 00:11:27.918 "rw_mbytes_per_sec": 0, 00:11:27.918 "r_mbytes_per_sec": 0, 00:11:27.918 "w_mbytes_per_sec": 0 00:11:27.918 }, 00:11:27.918 "claimed": false, 00:11:27.918 "zoned": false, 00:11:27.918 "supported_io_types": { 00:11:27.918 "read": true, 00:11:27.918 "write": true, 00:11:27.918 "unmap": false, 00:11:27.918 "write_zeroes": true, 00:11:27.918 "flush": false, 00:11:27.918 "reset": true, 00:11:27.918 "compare": false, 00:11:27.918 "compare_and_write": false, 00:11:27.918 "abort": false, 00:11:27.918 "nvme_admin": false, 00:11:27.918 
"nvme_io": false 00:11:27.918 }, 00:11:27.918 "memory_domains": [ 00:11:27.918 { 00:11:27.918 "dma_device_id": "system", 00:11:27.918 "dma_device_type": 1 00:11:27.918 }, 00:11:27.918 { 00:11:27.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.918 "dma_device_type": 2 00:11:27.918 }, 00:11:27.918 { 00:11:27.918 "dma_device_id": "system", 00:11:27.918 "dma_device_type": 1 00:11:27.918 }, 00:11:27.918 { 00:11:27.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.918 "dma_device_type": 2 00:11:27.918 } 00:11:27.918 ], 00:11:27.918 "driver_specific": { 00:11:27.918 "raid": { 00:11:27.918 "uuid": "52f64011-aebf-4a96-94a5-649dee4d5993", 00:11:27.918 "strip_size_kb": 0, 00:11:27.918 "state": "online", 00:11:27.918 "raid_level": "raid1", 00:11:27.918 "superblock": true, 00:11:27.918 "num_base_bdevs": 2, 00:11:27.918 "num_base_bdevs_discovered": 2, 00:11:27.918 "num_base_bdevs_operational": 2, 00:11:27.918 "base_bdevs_list": [ 00:11:27.918 { 00:11:27.918 "name": "pt1", 00:11:27.918 "uuid": "8f840dce-2361-5c16-849f-cd7db81b8113", 00:11:27.918 "is_configured": true, 00:11:27.918 "data_offset": 2048, 00:11:27.918 "data_size": 63488 00:11:27.918 }, 00:11:27.918 { 00:11:27.918 "name": "pt2", 00:11:27.918 "uuid": "fb69b2bc-1062-5e81-83d4-7681c7660844", 00:11:27.918 "is_configured": true, 00:11:27.918 "data_offset": 2048, 00:11:27.918 "data_size": 63488 00:11:27.918 } 00:11:27.918 ] 00:11:27.918 } 00:11:27.918 } 00:11:27.918 }' 00:11:27.918 03:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.918 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:11:27.918 pt2' 00:11:27.918 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:27.918 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:27.918 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:28.177 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:28.177 "name": "pt1", 00:11:28.177 "aliases": [ 00:11:28.177 "8f840dce-2361-5c16-849f-cd7db81b8113" 00:11:28.177 ], 00:11:28.177 "product_name": "passthru", 00:11:28.177 "block_size": 512, 00:11:28.177 "num_blocks": 65536, 00:11:28.177 "uuid": "8f840dce-2361-5c16-849f-cd7db81b8113", 00:11:28.177 "assigned_rate_limits": { 00:11:28.177 "rw_ios_per_sec": 0, 00:11:28.177 "rw_mbytes_per_sec": 0, 00:11:28.177 "r_mbytes_per_sec": 0, 00:11:28.177 "w_mbytes_per_sec": 0 00:11:28.177 }, 00:11:28.177 "claimed": true, 00:11:28.177 "claim_type": "exclusive_write", 00:11:28.177 "zoned": false, 00:11:28.177 "supported_io_types": { 00:11:28.177 "read": true, 00:11:28.177 "write": true, 00:11:28.177 "unmap": true, 00:11:28.177 "write_zeroes": true, 00:11:28.177 "flush": true, 00:11:28.177 "reset": true, 00:11:28.177 "compare": false, 00:11:28.177 "compare_and_write": false, 00:11:28.177 "abort": true, 00:11:28.177 "nvme_admin": false, 00:11:28.177 "nvme_io": false 00:11:28.177 }, 00:11:28.177 "memory_domains": [ 00:11:28.177 { 00:11:28.177 "dma_device_id": "system", 00:11:28.177 "dma_device_type": 1 00:11:28.177 }, 00:11:28.177 { 00:11:28.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.177 "dma_device_type": 2 00:11:28.177 } 00:11:28.177 ], 00:11:28.177 "driver_specific": { 00:11:28.177 "passthru": { 
00:11:28.177 "name": "pt1", 00:11:28.177 "base_bdev_name": "malloc1" 00:11:28.177 } 00:11:28.177 } 00:11:28.177 }' 00:11:28.177 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:28.177 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:28.436 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:28.436 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:28.436 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:28.436 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:28.436 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:28.436 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:28.436 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:28.436 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:28.436 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:28.697 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:28.697 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:28.697 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:28.697 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:28.957 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:28.957 "name": "pt2", 00:11:28.957 "aliases": [ 00:11:28.957 "fb69b2bc-1062-5e81-83d4-7681c7660844" 00:11:28.957 ], 00:11:28.957 "product_name": "passthru", 00:11:28.957 "block_size": 512, 00:11:28.957 "num_blocks": 65536, 00:11:28.957 "uuid": "fb69b2bc-1062-5e81-83d4-7681c7660844", 00:11:28.957 "assigned_rate_limits": { 00:11:28.957 "rw_ios_per_sec": 0, 00:11:28.957 "rw_mbytes_per_sec": 0, 00:11:28.957 "r_mbytes_per_sec": 0, 00:11:28.957 "w_mbytes_per_sec": 0 00:11:28.957 }, 00:11:28.957 "claimed": true, 00:11:28.957 "claim_type": "exclusive_write", 00:11:28.957 "zoned": false, 00:11:28.957 "supported_io_types": { 00:11:28.957 "read": true, 00:11:28.957 "write": true, 00:11:28.957 "unmap": true, 00:11:28.957 "write_zeroes": true, 00:11:28.957 "flush": true, 00:11:28.957 "reset": true, 00:11:28.957 "compare": false, 00:11:28.957 "compare_and_write": false, 00:11:28.957 "abort": true, 00:11:28.957 "nvme_admin": false, 00:11:28.957 "nvme_io": false 00:11:28.957 }, 00:11:28.957 "memory_domains": [ 00:11:28.957 { 00:11:28.957 "dma_device_id": "system", 00:11:28.957 "dma_device_type": 1 00:11:28.957 }, 00:11:28.957 { 00:11:28.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.957 "dma_device_type": 2 00:11:28.957 } 00:11:28.957 ], 00:11:28.957 "driver_specific": { 00:11:28.957 "passthru": { 00:11:28.957 "name": "pt2", 00:11:28.957 "base_bdev_name": "malloc2" 00:11:28.957 } 00:11:28.957 } 00:11:28.957 }' 00:11:28.957 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:28.957 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:28.957 03:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:28.957 03:05:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:28.957 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:28.957 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:28.957 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:28.957 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:29.216 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:29.216 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:29.216 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:29.216 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:29.216 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:29.216 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:29.475 [2024-05-15 03:06:00.477318] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.475 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 52f64011-aebf-4a96-94a5-649dee4d5993 '!=' 52f64011-aebf-4a96-94a5-649dee4d5993 ']' 00:11:29.475 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:29.475 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:11:29.475 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:11:29.475 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:29.734 [2024-05-15 03:06:00.733767] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:29.734 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:29.734 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:29.734 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:29.734 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:29.734 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:29.734 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:11:29.734 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:29.734 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:29.734 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:29.734 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:29.734 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.734 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.994 03:06:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:29.994 "name": "raid_bdev1", 00:11:29.994 "uuid": "52f64011-aebf-4a96-94a5-649dee4d5993", 00:11:29.994 "strip_size_kb": 0, 00:11:29.994 "state": "online", 00:11:29.994 "raid_level": "raid1", 00:11:29.994 "superblock": true, 00:11:29.994 "num_base_bdevs": 2, 00:11:29.994 "num_base_bdevs_discovered": 1, 00:11:29.994 "num_base_bdevs_operational": 1, 00:11:29.994 "base_bdevs_list": [ 00:11:29.994 { 00:11:29.994 "name": null, 00:11:29.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.994 "is_configured": false, 00:11:29.994 "data_offset": 2048, 00:11:29.994 "data_size": 63488 00:11:29.994 }, 00:11:29.994 { 00:11:29.994 "name": "pt2", 00:11:29.994 "uuid": "fb69b2bc-1062-5e81-83d4-7681c7660844", 00:11:29.994 "is_configured": true, 00:11:29.994 "data_offset": 2048, 00:11:29.994 "data_size": 63488 00:11:29.994 } 00:11:29.994 ] 00:11:29.994 }' 00:11:29.994 03:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:29.994 03:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.591 03:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:30.850 [2024-05-15 03:06:01.848721] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:30.850 [2024-05-15 03:06:01.848748] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.850 [2024-05-15 03:06:01.848804] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.850 [2024-05-15 03:06:01.848845] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.850 [2024-05-15 03:06:01.848867] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfa5cc0 name raid_bdev1, state offline 00:11:30.850 03:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.850 03:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:31.109 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:31.109 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:31.109 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:31.109 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:31.109 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:31.368 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:31.368 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:31.368 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:31.368 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:31.368 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:11:31.368 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:31.627 [2024-05-15 03:06:02.606711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:31.627 [2024-05-15 03:06:02.606756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.627 [2024-05-15 03:06:02.606778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdfde60 00:11:31.627 [2024-05-15 03:06:02.606789] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.627 [2024-05-15 03:06:02.608519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.627 [2024-05-15 03:06:02.608549] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:31.627 [2024-05-15 03:06:02.608618] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:31.627 [2024-05-15 03:06:02.608642] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:31.627 [2024-05-15 03:06:02.608723] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0xdf6240 00:11:31.627 [2024-05-15 03:06:02.608732] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:31.627 [2024-05-15 03:06:02.608919] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdf48d0 00:11:31.627 [2024-05-15 03:06:02.609048] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xdf6240 00:11:31.627 [2024-05-15 03:06:02.609056] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xdf6240 00:11:31.627 [2024-05-15 03:06:02.609163] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.627 pt2 00:11:31.627 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:31.627 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:31.627 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:31.627 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:31.627 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:31.627 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:11:31.627 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:31.627 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:31.627 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:31.627 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:31.627 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.627 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.886 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:31.886 "name": "raid_bdev1", 00:11:31.886 "uuid": "52f64011-aebf-4a96-94a5-649dee4d5993", 00:11:31.886 "strip_size_kb": 0, 00:11:31.886 "state": "online", 00:11:31.886 "raid_level": "raid1", 00:11:31.886 "superblock": true, 
00:11:31.886 "num_base_bdevs": 2, 00:11:31.886 "num_base_bdevs_discovered": 1, 00:11:31.886 "num_base_bdevs_operational": 1, 00:11:31.886 "base_bdevs_list": [ 00:11:31.886 { 00:11:31.886 "name": null, 00:11:31.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.886 "is_configured": false, 00:11:31.886 "data_offset": 2048, 00:11:31.886 "data_size": 63488 00:11:31.886 }, 00:11:31.886 { 00:11:31.886 "name": "pt2", 00:11:31.886 "uuid": "fb69b2bc-1062-5e81-83d4-7681c7660844", 00:11:31.886 "is_configured": true, 00:11:31.886 "data_offset": 2048, 00:11:31.886 "data_size": 63488 00:11:31.886 } 00:11:31.886 ] 00:11:31.886 }' 00:11:31.886 03:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:31.886 03:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.459 03:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:11:32.459 03:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:32.459 03:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:11:32.720 [2024-05-15 03:06:03.729934] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.720 03:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # '[' 52f64011-aebf-4a96-94a5-649dee4d5993 '!=' 52f64011-aebf-4a96-94a5-649dee4d5993 ']' 00:11:32.720 03:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 4056886 00:11:32.720 03:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 4056886 ']' 00:11:32.720 03:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 4056886 00:11:32.720 03:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:11:32.720 03:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:32.720 03:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4056886 00:11:32.720 03:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:32.720 03:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:32.720 03:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4056886' 00:11:32.720 killing process with pid 4056886 00:11:32.720 03:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 4056886 00:11:32.720 [2024-05-15 03:06:03.798997] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.720 [2024-05-15 03:06:03.799056] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.720 [2024-05-15 03:06:03.799098] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.720 [2024-05-15 03:06:03.799107] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdf6240 name raid_bdev1, state offline 00:11:32.720 03:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 4056886 00:11:32.720 [2024-05-15 03:06:03.815502] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.980 03:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:11:32.980 00:11:32.980 real 0m14.220s 
00:11:32.980 user 0m26.321s 00:11:32.980 sys 0m2.007s 00:11:32.980 03:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:32.980 03:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.980 ************************************ 00:11:32.980 END TEST raid_superblock_test 00:11:32.980 ************************************ 00:11:32.980 03:06:04 bdev_raid -- bdev/bdev_raid.sh@813 -- # for n in {2..4} 00:11:32.980 03:06:04 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:11:32.980 03:06:04 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:32.980 03:06:04 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:32.980 03:06:04 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:32.980 03:06:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.980 ************************************ 00:11:32.980 START TEST raid_state_function_test 00:11:32.980 ************************************ 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 false 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:11:32.980 03:06:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=4059681 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4059681' 00:11:32.980 Process raid pid: 4059681 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 4059681 /var/tmp/spdk-raid.sock 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 4059681 ']' 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:32.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:32.980 03:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.238 [2024-05-15 03:06:04.181868] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
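For reference: everything in this test runs against the bare bdev_svc app on /var/tmp/spdk-raid.sock, so the whole sequence below is reproducible by hand over RPC. A minimal sketch of the flow being exercised — assuming bdev_svc is already listening on that socket; the real harness additionally re-creates the array between steps and asserts on the returned JSON after every call:
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Creating the array before its base bdevs exist parks it in the "configuring" state.
    $rpc -s $sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # Each malloc bdev that appears is claimed as a base bdev; the array flips to
    # "online" once all three exist (visible below as "bdev BaseBdevN is claimed").
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc -s $sock bdev_malloc_create 32 512 -b "$b"
    done
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'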
00:11:33.238 [2024-05-15 03:06:04.181924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.238 [2024-05-15 03:06:04.280226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.238 [2024-05-15 03:06:04.373726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.496 [2024-05-15 03:06:04.432938] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.496 [2024-05-15 03:06:04.432972] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.064 03:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:34.064 03:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:11:34.064 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:34.322 [2024-05-15 03:06:05.367792] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.322 [2024-05-15 03:06:05.367832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.322 [2024-05-15 03:06:05.367841] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.322 [2024-05-15 03:06:05.367858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.322 [2024-05-15 03:06:05.367865] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.322 [2024-05-15 03:06:05.367874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.322 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:34.322 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:34.322 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:34.322 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:34.322 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:34.322 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:34.322 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:34.322 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:34.322 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:34.322 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:34.322 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.322 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.581 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:11:34.581 "name": "Existed_Raid", 00:11:34.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.581 "strip_size_kb": 64, 00:11:34.581 "state": "configuring", 00:11:34.581 "raid_level": "raid0", 00:11:34.581 "superblock": false, 00:11:34.581 "num_base_bdevs": 3, 00:11:34.581 "num_base_bdevs_discovered": 0, 00:11:34.581 "num_base_bdevs_operational": 3, 00:11:34.581 "base_bdevs_list": [ 00:11:34.581 { 00:11:34.581 "name": "BaseBdev1", 00:11:34.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.581 "is_configured": false, 00:11:34.581 "data_offset": 0, 00:11:34.581 "data_size": 0 00:11:34.581 }, 00:11:34.581 { 00:11:34.581 "name": "BaseBdev2", 00:11:34.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.581 "is_configured": false, 00:11:34.581 "data_offset": 0, 00:11:34.581 "data_size": 0 00:11:34.581 }, 00:11:34.581 { 00:11:34.581 "name": "BaseBdev3", 00:11:34.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.581 "is_configured": false, 00:11:34.581 "data_offset": 0, 00:11:34.581 "data_size": 0 00:11:34.581 } 00:11:34.581 ] 00:11:34.581 }' 00:11:34.581 03:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:34.581 03:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.147 03:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:35.406 [2024-05-15 03:06:06.482631] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.406 [2024-05-15 03:06:06.482661] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2011de0 name Existed_Raid, state configuring 00:11:35.406 03:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:35.665 [2024-05-15 03:06:06.735322] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.665 [2024-05-15 03:06:06.735349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.665 [2024-05-15 03:06:06.735357] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.665 [2024-05-15 03:06:06.735365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.665 [2024-05-15 03:06:06.735372] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.665 [2024-05-15 03:06:06.735380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.665 03:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:35.923 [2024-05-15 03:06:06.997491] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.923 BaseBdev1 00:11:35.923 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:11:35.923 03:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:35.923 03:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:35.923 03:06:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:35.923 03:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:35.923 03:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:35.923 03:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:36.182 03:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.441 [ 00:11:36.441 { 00:11:36.441 "name": "BaseBdev1", 00:11:36.441 "aliases": [ 00:11:36.441 "a84e1e93-020a-47e8-ac5a-7c4858fb4289" 00:11:36.441 ], 00:11:36.441 "product_name": "Malloc disk", 00:11:36.441 "block_size": 512, 00:11:36.441 "num_blocks": 65536, 00:11:36.441 "uuid": "a84e1e93-020a-47e8-ac5a-7c4858fb4289", 00:11:36.441 "assigned_rate_limits": { 00:11:36.441 "rw_ios_per_sec": 0, 00:11:36.441 "rw_mbytes_per_sec": 0, 00:11:36.441 "r_mbytes_per_sec": 0, 00:11:36.441 "w_mbytes_per_sec": 0 00:11:36.441 }, 00:11:36.441 "claimed": true, 00:11:36.441 "claim_type": "exclusive_write", 00:11:36.441 "zoned": false, 00:11:36.441 "supported_io_types": { 00:11:36.441 "read": true, 00:11:36.441 "write": true, 00:11:36.441 "unmap": true, 00:11:36.441 "write_zeroes": true, 00:11:36.441 "flush": true, 00:11:36.441 "reset": true, 00:11:36.441 "compare": false, 00:11:36.441 "compare_and_write": false, 00:11:36.441 "abort": true, 00:11:36.441 "nvme_admin": false, 00:11:36.441 "nvme_io": false 00:11:36.441 }, 00:11:36.441 "memory_domains": [ 00:11:36.441 { 00:11:36.441 "dma_device_id": "system", 00:11:36.441 "dma_device_type": 1 00:11:36.441 }, 00:11:36.441 { 00:11:36.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.441 "dma_device_type": 2 00:11:36.441 } 00:11:36.441 ], 00:11:36.441 "driver_specific": {} 00:11:36.441 } 00:11:36.441 ] 00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
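The verify_raid_bdev_state helper traced throughout this log (bdev_raid.sh@117-127) boils down to fetch-and-compare: pull the array's JSON once via bdev_raid_get_bdevs, then assert on its fields. A rough equivalent — assuming the same $rpc/$sock shorthand as above, using only fields visible in the dumps, and not the helper's exact assertion list:
    info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [ "$(jq -r .state <<< "$info")" = configuring ]              # the expected_state argument
    [ "$(jq -r .raid_level <<< "$info")" = raid0 ]
    [ "$(jq -r .strip_size_kb <<< "$info")" -eq 64 ]             # strip_size
    [ "$(jq -r .num_base_bdevs_operational <<< "$info")" -eq 3 ]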
00:11:36.441 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.700 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:36.700 "name": "Existed_Raid", 00:11:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.700 "strip_size_kb": 64, 00:11:36.700 "state": "configuring", 00:11:36.700 "raid_level": "raid0", 00:11:36.700 "superblock": false, 00:11:36.700 "num_base_bdevs": 3, 00:11:36.700 "num_base_bdevs_discovered": 1, 00:11:36.700 "num_base_bdevs_operational": 3, 00:11:36.700 "base_bdevs_list": [ 00:11:36.700 { 00:11:36.700 "name": "BaseBdev1", 00:11:36.700 "uuid": "a84e1e93-020a-47e8-ac5a-7c4858fb4289", 00:11:36.700 "is_configured": true, 00:11:36.700 "data_offset": 0, 00:11:36.700 "data_size": 65536 00:11:36.700 }, 00:11:36.700 { 00:11:36.700 "name": "BaseBdev2", 00:11:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.700 "is_configured": false, 00:11:36.700 "data_offset": 0, 00:11:36.700 "data_size": 0 00:11:36.700 }, 00:11:36.700 { 00:11:36.700 "name": "BaseBdev3", 00:11:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.700 "is_configured": false, 00:11:36.700 "data_offset": 0, 00:11:36.700 "data_size": 0 00:11:36.700 } 00:11:36.700 ] 00:11:36.700 }' 00:11:36.700 03:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:36.700 03:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.267 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:37.525 [2024-05-15 03:06:08.637877] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:37.525 [2024-05-15 03:06:08.637916] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20116b0 name Existed_Raid, state configuring 00:11:37.525 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:37.784 [2024-05-15 03:06:08.894581] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.784 [2024-05-15 03:06:08.896098] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:37.784 [2024-05-15 03:06:08.896127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:37.784 [2024-05-15 03:06:08.896136] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:37.784 [2024-05-15 03:06:08.896144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:37.784 03:06:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.784 03:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.042 03:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:38.042 "name": "Existed_Raid", 00:11:38.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.042 "strip_size_kb": 64, 00:11:38.042 "state": "configuring", 00:11:38.042 "raid_level": "raid0", 00:11:38.042 "superblock": false, 00:11:38.042 "num_base_bdevs": 3, 00:11:38.042 "num_base_bdevs_discovered": 1, 00:11:38.042 "num_base_bdevs_operational": 3, 00:11:38.042 "base_bdevs_list": [ 00:11:38.042 { 00:11:38.042 "name": "BaseBdev1", 00:11:38.042 "uuid": "a84e1e93-020a-47e8-ac5a-7c4858fb4289", 00:11:38.042 "is_configured": true, 00:11:38.042 "data_offset": 0, 00:11:38.042 "data_size": 65536 00:11:38.042 }, 00:11:38.042 { 00:11:38.042 "name": "BaseBdev2", 00:11:38.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.042 "is_configured": false, 00:11:38.042 "data_offset": 0, 00:11:38.042 "data_size": 0 00:11:38.042 }, 00:11:38.042 { 00:11:38.042 "name": "BaseBdev3", 00:11:38.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.042 "is_configured": false, 00:11:38.042 "data_offset": 0, 00:11:38.042 "data_size": 0 00:11:38.042 } 00:11:38.042 ] 00:11:38.042 }' 00:11:38.042 03:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:38.042 03:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.977 03:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:38.977 [2024-05-15 03:06:10.032938] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.977 BaseBdev2 00:11:38.977 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:11:38.977 03:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:38.977 03:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:38.977 03:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:38.977 03:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:38.977 03:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:38.977 03:06:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:39.235 03:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:39.494 [ 00:11:39.494 { 00:11:39.494 "name": "BaseBdev2", 00:11:39.494 "aliases": [ 00:11:39.494 "47acfe7a-8db1-4242-b078-c3c0491487f9" 00:11:39.494 ], 00:11:39.494 "product_name": "Malloc disk", 00:11:39.494 "block_size": 512, 00:11:39.494 "num_blocks": 65536, 00:11:39.494 "uuid": "47acfe7a-8db1-4242-b078-c3c0491487f9", 00:11:39.494 "assigned_rate_limits": { 00:11:39.494 "rw_ios_per_sec": 0, 00:11:39.494 "rw_mbytes_per_sec": 0, 00:11:39.494 "r_mbytes_per_sec": 0, 00:11:39.494 "w_mbytes_per_sec": 0 00:11:39.494 }, 00:11:39.494 "claimed": true, 00:11:39.494 "claim_type": "exclusive_write", 00:11:39.494 "zoned": false, 00:11:39.494 "supported_io_types": { 00:11:39.494 "read": true, 00:11:39.494 "write": true, 00:11:39.494 "unmap": true, 00:11:39.494 "write_zeroes": true, 00:11:39.494 "flush": true, 00:11:39.494 "reset": true, 00:11:39.494 "compare": false, 00:11:39.494 "compare_and_write": false, 00:11:39.494 "abort": true, 00:11:39.494 "nvme_admin": false, 00:11:39.494 "nvme_io": false 00:11:39.494 }, 00:11:39.494 "memory_domains": [ 00:11:39.494 { 00:11:39.494 "dma_device_id": "system", 00:11:39.494 "dma_device_type": 1 00:11:39.494 }, 00:11:39.494 { 00:11:39.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.494 "dma_device_type": 2 00:11:39.494 } 00:11:39.494 ], 00:11:39.494 "driver_specific": {} 00:11:39.494 } 00:11:39.494 ] 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.494 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:39.753 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:39.753 "name": "Existed_Raid", 00:11:39.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.753 "strip_size_kb": 64, 00:11:39.753 "state": "configuring", 00:11:39.753 "raid_level": "raid0", 00:11:39.753 "superblock": false, 00:11:39.753 "num_base_bdevs": 3, 00:11:39.753 "num_base_bdevs_discovered": 2, 00:11:39.753 "num_base_bdevs_operational": 3, 00:11:39.753 "base_bdevs_list": [ 00:11:39.753 { 00:11:39.753 "name": "BaseBdev1", 00:11:39.753 "uuid": "a84e1e93-020a-47e8-ac5a-7c4858fb4289", 00:11:39.753 "is_configured": true, 00:11:39.753 "data_offset": 0, 00:11:39.753 "data_size": 65536 00:11:39.753 }, 00:11:39.753 { 00:11:39.753 "name": "BaseBdev2", 00:11:39.753 "uuid": "47acfe7a-8db1-4242-b078-c3c0491487f9", 00:11:39.753 "is_configured": true, 00:11:39.753 "data_offset": 0, 00:11:39.753 "data_size": 65536 00:11:39.753 }, 00:11:39.753 { 00:11:39.753 "name": "BaseBdev3", 00:11:39.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.753 "is_configured": false, 00:11:39.753 "data_offset": 0, 00:11:39.753 "data_size": 0 00:11:39.753 } 00:11:39.753 ] 00:11:39.753 }' 00:11:39.753 03:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:39.753 03:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.321 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:40.578 [2024-05-15 03:06:11.584261] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.578 [2024-05-15 03:06:11.584297] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x2012760 00:11:40.578 [2024-05-15 03:06:11.584304] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:40.578 [2024-05-15 03:06:11.584502] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2029690 00:11:40.578 [2024-05-15 03:06:11.584630] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2012760 00:11:40.578 [2024-05-15 03:06:11.584638] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2012760 00:11:40.578 [2024-05-15 03:06:11.584801] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.578 BaseBdev3 00:11:40.578 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:11:40.578 03:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:11:40.578 03:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:40.578 03:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:40.578 03:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:40.578 03:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:40.578 03:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:40.837 [ 00:11:40.837 { 00:11:40.837 "name": "BaseBdev3", 00:11:40.837 "aliases": [ 00:11:40.837 "c5e2c858-c3ab-42fc-bffe-f8647c8d1e96" 00:11:40.837 ], 00:11:40.837 "product_name": "Malloc disk", 00:11:40.837 "block_size": 512, 00:11:40.837 "num_blocks": 65536, 00:11:40.837 "uuid": "c5e2c858-c3ab-42fc-bffe-f8647c8d1e96", 00:11:40.837 "assigned_rate_limits": { 00:11:40.837 "rw_ios_per_sec": 0, 00:11:40.837 "rw_mbytes_per_sec": 0, 00:11:40.837 "r_mbytes_per_sec": 0, 00:11:40.837 "w_mbytes_per_sec": 0 00:11:40.837 }, 00:11:40.837 "claimed": true, 00:11:40.837 "claim_type": "exclusive_write", 00:11:40.837 "zoned": false, 00:11:40.837 "supported_io_types": { 00:11:40.837 "read": true, 00:11:40.837 "write": true, 00:11:40.837 "unmap": true, 00:11:40.837 "write_zeroes": true, 00:11:40.837 "flush": true, 00:11:40.837 "reset": true, 00:11:40.837 "compare": false, 00:11:40.837 "compare_and_write": false, 00:11:40.837 "abort": true, 00:11:40.837 "nvme_admin": false, 00:11:40.837 "nvme_io": false 00:11:40.837 }, 00:11:40.837 "memory_domains": [ 00:11:40.837 { 00:11:40.837 "dma_device_id": "system", 00:11:40.837 "dma_device_type": 1 00:11:40.837 }, 00:11:40.837 { 00:11:40.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.837 "dma_device_type": 2 00:11:40.837 } 00:11:40.837 ], 00:11:40.837 "driver_specific": {} 00:11:40.837 } 00:11:40.837 ] 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.837 03:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.096 03:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:41.096 "name": "Existed_Raid", 00:11:41.096 "uuid": "a6c7cd81-7a3b-4a9f-83f6-ca253e711289", 00:11:41.096 "strip_size_kb": 64, 00:11:41.096 "state": "online", 
00:11:41.096 "raid_level": "raid0", 00:11:41.096 "superblock": false, 00:11:41.096 "num_base_bdevs": 3, 00:11:41.096 "num_base_bdevs_discovered": 3, 00:11:41.096 "num_base_bdevs_operational": 3, 00:11:41.096 "base_bdevs_list": [ 00:11:41.096 { 00:11:41.096 "name": "BaseBdev1", 00:11:41.096 "uuid": "a84e1e93-020a-47e8-ac5a-7c4858fb4289", 00:11:41.096 "is_configured": true, 00:11:41.096 "data_offset": 0, 00:11:41.096 "data_size": 65536 00:11:41.096 }, 00:11:41.096 { 00:11:41.096 "name": "BaseBdev2", 00:11:41.096 "uuid": "47acfe7a-8db1-4242-b078-c3c0491487f9", 00:11:41.096 "is_configured": true, 00:11:41.096 "data_offset": 0, 00:11:41.096 "data_size": 65536 00:11:41.096 }, 00:11:41.096 { 00:11:41.096 "name": "BaseBdev3", 00:11:41.096 "uuid": "c5e2c858-c3ab-42fc-bffe-f8647c8d1e96", 00:11:41.096 "is_configured": true, 00:11:41.096 "data_offset": 0, 00:11:41.096 "data_size": 65536 00:11:41.096 } 00:11:41.096 ] 00:11:41.096 }' 00:11:41.096 03:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:41.096 03:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.663 03:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.663 03:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:11:41.663 03:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:11:41.663 03:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:11:41.663 03:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:11:41.663 03:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:11:41.663 03:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:41.663 03:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:11:41.921 [2024-05-15 03:06:12.976280] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.921 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:11:41.921 "name": "Existed_Raid", 00:11:41.921 "aliases": [ 00:11:41.921 "a6c7cd81-7a3b-4a9f-83f6-ca253e711289" 00:11:41.921 ], 00:11:41.921 "product_name": "Raid Volume", 00:11:41.921 "block_size": 512, 00:11:41.921 "num_blocks": 196608, 00:11:41.921 "uuid": "a6c7cd81-7a3b-4a9f-83f6-ca253e711289", 00:11:41.921 "assigned_rate_limits": { 00:11:41.921 "rw_ios_per_sec": 0, 00:11:41.921 "rw_mbytes_per_sec": 0, 00:11:41.921 "r_mbytes_per_sec": 0, 00:11:41.921 "w_mbytes_per_sec": 0 00:11:41.921 }, 00:11:41.921 "claimed": false, 00:11:41.921 "zoned": false, 00:11:41.921 "supported_io_types": { 00:11:41.921 "read": true, 00:11:41.921 "write": true, 00:11:41.921 "unmap": true, 00:11:41.921 "write_zeroes": true, 00:11:41.921 "flush": true, 00:11:41.921 "reset": true, 00:11:41.921 "compare": false, 00:11:41.921 "compare_and_write": false, 00:11:41.921 "abort": false, 00:11:41.921 "nvme_admin": false, 00:11:41.921 "nvme_io": false 00:11:41.921 }, 00:11:41.921 "memory_domains": [ 00:11:41.921 { 00:11:41.921 "dma_device_id": "system", 00:11:41.921 "dma_device_type": 1 00:11:41.921 }, 00:11:41.921 { 00:11:41.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.921 "dma_device_type": 2 00:11:41.921 }, 
00:11:41.921 { 00:11:41.921 "dma_device_id": "system", 00:11:41.921 "dma_device_type": 1 00:11:41.921 }, 00:11:41.921 { 00:11:41.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.921 "dma_device_type": 2 00:11:41.921 }, 00:11:41.921 { 00:11:41.921 "dma_device_id": "system", 00:11:41.921 "dma_device_type": 1 00:11:41.921 }, 00:11:41.921 { 00:11:41.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.921 "dma_device_type": 2 00:11:41.921 } 00:11:41.921 ], 00:11:41.921 "driver_specific": { 00:11:41.921 "raid": { 00:11:41.921 "uuid": "a6c7cd81-7a3b-4a9f-83f6-ca253e711289", 00:11:41.921 "strip_size_kb": 64, 00:11:41.921 "state": "online", 00:11:41.921 "raid_level": "raid0", 00:11:41.921 "superblock": false, 00:11:41.922 "num_base_bdevs": 3, 00:11:41.922 "num_base_bdevs_discovered": 3, 00:11:41.922 "num_base_bdevs_operational": 3, 00:11:41.922 "base_bdevs_list": [ 00:11:41.922 { 00:11:41.922 "name": "BaseBdev1", 00:11:41.922 "uuid": "a84e1e93-020a-47e8-ac5a-7c4858fb4289", 00:11:41.922 "is_configured": true, 00:11:41.922 "data_offset": 0, 00:11:41.922 "data_size": 65536 00:11:41.922 }, 00:11:41.922 { 00:11:41.922 "name": "BaseBdev2", 00:11:41.922 "uuid": "47acfe7a-8db1-4242-b078-c3c0491487f9", 00:11:41.922 "is_configured": true, 00:11:41.922 "data_offset": 0, 00:11:41.922 "data_size": 65536 00:11:41.922 }, 00:11:41.922 { 00:11:41.922 "name": "BaseBdev3", 00:11:41.922 "uuid": "c5e2c858-c3ab-42fc-bffe-f8647c8d1e96", 00:11:41.922 "is_configured": true, 00:11:41.922 "data_offset": 0, 00:11:41.922 "data_size": 65536 00:11:41.922 } 00:11:41.922 ] 00:11:41.922 } 00:11:41.922 } 00:11:41.922 }' 00:11:41.922 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.922 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:11:41.922 BaseBdev2 00:11:41.922 BaseBdev3' 00:11:41.922 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:41.922 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:41.922 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:42.180 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:42.180 "name": "BaseBdev1", 00:11:42.180 "aliases": [ 00:11:42.180 "a84e1e93-020a-47e8-ac5a-7c4858fb4289" 00:11:42.180 ], 00:11:42.180 "product_name": "Malloc disk", 00:11:42.180 "block_size": 512, 00:11:42.180 "num_blocks": 65536, 00:11:42.180 "uuid": "a84e1e93-020a-47e8-ac5a-7c4858fb4289", 00:11:42.180 "assigned_rate_limits": { 00:11:42.180 "rw_ios_per_sec": 0, 00:11:42.180 "rw_mbytes_per_sec": 0, 00:11:42.180 "r_mbytes_per_sec": 0, 00:11:42.180 "w_mbytes_per_sec": 0 00:11:42.180 }, 00:11:42.180 "claimed": true, 00:11:42.180 "claim_type": "exclusive_write", 00:11:42.180 "zoned": false, 00:11:42.180 "supported_io_types": { 00:11:42.180 "read": true, 00:11:42.180 "write": true, 00:11:42.180 "unmap": true, 00:11:42.180 "write_zeroes": true, 00:11:42.180 "flush": true, 00:11:42.180 "reset": true, 00:11:42.180 "compare": false, 00:11:42.180 "compare_and_write": false, 00:11:42.180 "abort": true, 00:11:42.180 "nvme_admin": false, 00:11:42.180 "nvme_io": false 00:11:42.180 }, 00:11:42.181 "memory_domains": [ 00:11:42.181 { 00:11:42.181 "dma_device_id": "system", 
00:11:42.181 "dma_device_type": 1 00:11:42.181 }, 00:11:42.181 { 00:11:42.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.181 "dma_device_type": 2 00:11:42.181 } 00:11:42.181 ], 00:11:42.181 "driver_specific": {} 00:11:42.181 }' 00:11:42.181 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:42.455 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:42.455 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:42.455 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:42.455 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:42.455 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:42.455 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:42.456 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:42.456 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:42.456 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:42.717 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:42.717 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:42.717 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:42.717 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:42.717 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:42.977 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:42.977 "name": "BaseBdev2", 00:11:42.977 "aliases": [ 00:11:42.977 "47acfe7a-8db1-4242-b078-c3c0491487f9" 00:11:42.977 ], 00:11:42.977 "product_name": "Malloc disk", 00:11:42.977 "block_size": 512, 00:11:42.977 "num_blocks": 65536, 00:11:42.977 "uuid": "47acfe7a-8db1-4242-b078-c3c0491487f9", 00:11:42.977 "assigned_rate_limits": { 00:11:42.977 "rw_ios_per_sec": 0, 00:11:42.977 "rw_mbytes_per_sec": 0, 00:11:42.977 "r_mbytes_per_sec": 0, 00:11:42.977 "w_mbytes_per_sec": 0 00:11:42.977 }, 00:11:42.977 "claimed": true, 00:11:42.977 "claim_type": "exclusive_write", 00:11:42.977 "zoned": false, 00:11:42.977 "supported_io_types": { 00:11:42.977 "read": true, 00:11:42.977 "write": true, 00:11:42.977 "unmap": true, 00:11:42.977 "write_zeroes": true, 00:11:42.977 "flush": true, 00:11:42.977 "reset": true, 00:11:42.977 "compare": false, 00:11:42.977 "compare_and_write": false, 00:11:42.977 "abort": true, 00:11:42.977 "nvme_admin": false, 00:11:42.977 "nvme_io": false 00:11:42.977 }, 00:11:42.977 "memory_domains": [ 00:11:42.977 { 00:11:42.977 "dma_device_id": "system", 00:11:42.977 "dma_device_type": 1 00:11:42.977 }, 00:11:42.977 { 00:11:42.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.977 "dma_device_type": 2 00:11:42.977 } 00:11:42.977 ], 00:11:42.977 "driver_specific": {} 00:11:42.977 }' 00:11:42.977 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:42.977 03:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:42.977 03:06:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:42.977 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:42.977 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:42.977 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:42.977 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:43.242 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:43.243 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:43.243 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:43.243 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:43.243 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:43.243 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:43.243 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:43.243 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:43.503 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:43.503 "name": "BaseBdev3", 00:11:43.503 "aliases": [ 00:11:43.503 "c5e2c858-c3ab-42fc-bffe-f8647c8d1e96" 00:11:43.503 ], 00:11:43.503 "product_name": "Malloc disk", 00:11:43.503 "block_size": 512, 00:11:43.503 "num_blocks": 65536, 00:11:43.503 "uuid": "c5e2c858-c3ab-42fc-bffe-f8647c8d1e96", 00:11:43.503 "assigned_rate_limits": { 00:11:43.503 "rw_ios_per_sec": 0, 00:11:43.503 "rw_mbytes_per_sec": 0, 00:11:43.503 "r_mbytes_per_sec": 0, 00:11:43.503 "w_mbytes_per_sec": 0 00:11:43.503 }, 00:11:43.503 "claimed": true, 00:11:43.503 "claim_type": "exclusive_write", 00:11:43.503 "zoned": false, 00:11:43.503 "supported_io_types": { 00:11:43.503 "read": true, 00:11:43.503 "write": true, 00:11:43.503 "unmap": true, 00:11:43.503 "write_zeroes": true, 00:11:43.503 "flush": true, 00:11:43.503 "reset": true, 00:11:43.503 "compare": false, 00:11:43.503 "compare_and_write": false, 00:11:43.503 "abort": true, 00:11:43.503 "nvme_admin": false, 00:11:43.503 "nvme_io": false 00:11:43.503 }, 00:11:43.503 "memory_domains": [ 00:11:43.503 { 00:11:43.503 "dma_device_id": "system", 00:11:43.503 "dma_device_type": 1 00:11:43.503 }, 00:11:43.503 { 00:11:43.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.503 "dma_device_type": 2 00:11:43.503 } 00:11:43.503 ], 00:11:43.503 "driver_specific": {} 00:11:43.503 }' 00:11:43.503 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:43.503 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:43.503 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:43.503 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:43.762 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:43.762 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:43.762 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.md_interleave 00:11:43.762 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:43.762 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:43.762 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:43.762 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:43.762 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:43.762 03:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:44.021 [2024-05-15 03:06:15.129821] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:44.021 [2024-05-15 03:06:15.129846] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.021 [2024-05-15 03:06:15.129899] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.021 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.280 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:44.280 "name": "Existed_Raid", 00:11:44.280 "uuid": "a6c7cd81-7a3b-4a9f-83f6-ca253e711289", 00:11:44.280 "strip_size_kb": 64, 00:11:44.280 "state": "offline", 00:11:44.280 "raid_level": "raid0", 00:11:44.280 "superblock": false, 00:11:44.280 "num_base_bdevs": 3, 00:11:44.280 "num_base_bdevs_discovered": 2, 00:11:44.280 
"num_base_bdevs_operational": 2, 00:11:44.280 "base_bdevs_list": [ 00:11:44.280 { 00:11:44.280 "name": null, 00:11:44.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.280 "is_configured": false, 00:11:44.280 "data_offset": 0, 00:11:44.280 "data_size": 65536 00:11:44.280 }, 00:11:44.280 { 00:11:44.280 "name": "BaseBdev2", 00:11:44.280 "uuid": "47acfe7a-8db1-4242-b078-c3c0491487f9", 00:11:44.280 "is_configured": true, 00:11:44.280 "data_offset": 0, 00:11:44.280 "data_size": 65536 00:11:44.280 }, 00:11:44.280 { 00:11:44.280 "name": "BaseBdev3", 00:11:44.280 "uuid": "c5e2c858-c3ab-42fc-bffe-f8647c8d1e96", 00:11:44.280 "is_configured": true, 00:11:44.280 "data_offset": 0, 00:11:44.280 "data_size": 65536 00:11:44.280 } 00:11:44.280 ] 00:11:44.280 }' 00:11:44.280 03:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:44.280 03:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.883 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:44.883 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.883 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.883 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:11:45.142 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:11:45.142 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:45.142 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:45.401 [2024-05-15 03:06:16.522734] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:45.401 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:45.401 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:45.401 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.401 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:11:45.660 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:11:45.660 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:45.660 03:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:45.919 [2024-05-15 03:06:17.026424] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:45.919 [2024-05-15 03:06:17.026461] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2012760 name Existed_Raid, state offline 00:11:45.919 03:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:45.919 03:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:45.919 03:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.919 03:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:11:46.178 03:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:11:46.178 03:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:11:46.178 03:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:11:46.178 03:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:11:46.178 03:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:11:46.178 03:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.437 BaseBdev2 00:11:46.437 03:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:11:46.437 03:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:46.437 03:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:46.437 03:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:46.437 03:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:46.437 03:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:46.437 03:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:46.696 03:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.954 [ 00:11:46.954 { 00:11:46.954 "name": "BaseBdev2", 00:11:46.954 "aliases": [ 00:11:46.955 "dbca904e-b3ce-45be-a3f4-67096f8fcf5a" 00:11:46.955 ], 00:11:46.955 "product_name": "Malloc disk", 00:11:46.955 "block_size": 512, 00:11:46.955 "num_blocks": 65536, 00:11:46.955 "uuid": "dbca904e-b3ce-45be-a3f4-67096f8fcf5a", 00:11:46.955 "assigned_rate_limits": { 00:11:46.955 "rw_ios_per_sec": 0, 00:11:46.955 "rw_mbytes_per_sec": 0, 00:11:46.955 "r_mbytes_per_sec": 0, 00:11:46.955 "w_mbytes_per_sec": 0 00:11:46.955 }, 00:11:46.955 "claimed": false, 00:11:46.955 "zoned": false, 00:11:46.955 "supported_io_types": { 00:11:46.955 "read": true, 00:11:46.955 "write": true, 00:11:46.955 "unmap": true, 00:11:46.955 "write_zeroes": true, 00:11:46.955 "flush": true, 00:11:46.955 "reset": true, 00:11:46.955 "compare": false, 00:11:46.955 "compare_and_write": false, 00:11:46.955 "abort": true, 00:11:46.955 "nvme_admin": false, 00:11:46.955 "nvme_io": false 00:11:46.955 }, 00:11:46.955 "memory_domains": [ 00:11:46.955 { 00:11:46.955 "dma_device_id": "system", 00:11:46.955 "dma_device_type": 1 00:11:46.955 }, 00:11:46.955 { 00:11:46.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.955 "dma_device_type": 2 00:11:46.955 } 00:11:46.955 ], 00:11:46.955 "driver_specific": {} 00:11:46.955 } 00:11:46.955 ] 00:11:46.955 03:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:46.955 03:06:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:11:46.955 03:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:11:46.955 03:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:47.214 BaseBdev3 00:11:47.214 03:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:11:47.214 03:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:11:47.214 03:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:47.214 03:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:47.214 03:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:47.214 03:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:47.214 03:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:47.473 03:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:47.732 [ 00:11:47.732 { 00:11:47.732 "name": "BaseBdev3", 00:11:47.732 "aliases": [ 00:11:47.732 "6476f825-3606-4f33-8c1c-cf5c5a0a9258" 00:11:47.732 ], 00:11:47.732 "product_name": "Malloc disk", 00:11:47.732 "block_size": 512, 00:11:47.732 "num_blocks": 65536, 00:11:47.732 "uuid": "6476f825-3606-4f33-8c1c-cf5c5a0a9258", 00:11:47.732 "assigned_rate_limits": { 00:11:47.732 "rw_ios_per_sec": 0, 00:11:47.732 "rw_mbytes_per_sec": 0, 00:11:47.732 "r_mbytes_per_sec": 0, 00:11:47.732 "w_mbytes_per_sec": 0 00:11:47.732 }, 00:11:47.732 "claimed": false, 00:11:47.732 "zoned": false, 00:11:47.732 "supported_io_types": { 00:11:47.732 "read": true, 00:11:47.732 "write": true, 00:11:47.732 "unmap": true, 00:11:47.732 "write_zeroes": true, 00:11:47.732 "flush": true, 00:11:47.732 "reset": true, 00:11:47.732 "compare": false, 00:11:47.732 "compare_and_write": false, 00:11:47.732 "abort": true, 00:11:47.732 "nvme_admin": false, 00:11:47.732 "nvme_io": false 00:11:47.732 }, 00:11:47.732 "memory_domains": [ 00:11:47.732 { 00:11:47.732 "dma_device_id": "system", 00:11:47.732 "dma_device_type": 1 00:11:47.732 }, 00:11:47.732 { 00:11:47.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.732 "dma_device_type": 2 00:11:47.732 } 00:11:47.732 ], 00:11:47.732 "driver_specific": {} 00:11:47.732 } 00:11:47.732 ] 00:11:47.732 03:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:47.732 03:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:11:47.732 03:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:11:47.732 03:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:47.991 [2024-05-15 03:06:19.054958] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.991 
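
The sequence just above is worth pulling out: bdev_raid_create is issued while BaseBdev1 does not exist yet, the target only logs the "Currently unable to find bdev" notice, and the array is parked in the "configuring" state until the missing member appears. A minimal sketch of the same flow against a running target, using the socket path and bdev names from this trace (assumes it is run from an SPDK checkout with the target already listening):

    # Create a raid0 array whose first member is still missing; the RPC
    # succeeds and the array waits in the "configuring" state.
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # Inspect the state the same way the test's jq filter does.
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid").state'    # prints "configuring"
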
[2024-05-15 03:06:19.054995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.991 [2024-05-15 03:06:19.055013] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.991 [2024-05-15 03:06:19.056406] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.991 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:47.991 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:47.991 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:47.991 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:47.991 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:47.991 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:47.991 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:47.991 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:47.991 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:47.991 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:47.991 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.991 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.250 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:48.250 "name": "Existed_Raid", 00:11:48.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.250 "strip_size_kb": 64, 00:11:48.250 "state": "configuring", 00:11:48.250 "raid_level": "raid0", 00:11:48.250 "superblock": false, 00:11:48.250 "num_base_bdevs": 3, 00:11:48.250 "num_base_bdevs_discovered": 2, 00:11:48.250 "num_base_bdevs_operational": 3, 00:11:48.250 "base_bdevs_list": [ 00:11:48.250 { 00:11:48.250 "name": "BaseBdev1", 00:11:48.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.250 "is_configured": false, 00:11:48.250 "data_offset": 0, 00:11:48.250 "data_size": 0 00:11:48.250 }, 00:11:48.250 { 00:11:48.250 "name": "BaseBdev2", 00:11:48.250 "uuid": "dbca904e-b3ce-45be-a3f4-67096f8fcf5a", 00:11:48.250 "is_configured": true, 00:11:48.250 "data_offset": 0, 00:11:48.250 "data_size": 65536 00:11:48.250 }, 00:11:48.250 { 00:11:48.250 "name": "BaseBdev3", 00:11:48.250 "uuid": "6476f825-3606-4f33-8c1c-cf5c5a0a9258", 00:11:48.250 "is_configured": true, 00:11:48.250 "data_offset": 0, 00:11:48.250 "data_size": 65536 00:11:48.250 } 00:11:48.250 ] 00:11:48.250 }' 00:11:48.250 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:48.250 03:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.819 03:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:49.078 [2024-05-15 03:06:20.173941] 
bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:49.078 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:49.078 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:49.078 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:49.078 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:49.078 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:49.078 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:49.078 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:49.078 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:49.078 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:49.078 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:49.078 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.078 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.337 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:49.337 "name": "Existed_Raid", 00:11:49.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.337 "strip_size_kb": 64, 00:11:49.337 "state": "configuring", 00:11:49.337 "raid_level": "raid0", 00:11:49.337 "superblock": false, 00:11:49.337 "num_base_bdevs": 3, 00:11:49.337 "num_base_bdevs_discovered": 1, 00:11:49.337 "num_base_bdevs_operational": 3, 00:11:49.337 "base_bdevs_list": [ 00:11:49.337 { 00:11:49.337 "name": "BaseBdev1", 00:11:49.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.337 "is_configured": false, 00:11:49.337 "data_offset": 0, 00:11:49.337 "data_size": 0 00:11:49.337 }, 00:11:49.337 { 00:11:49.337 "name": null, 00:11:49.337 "uuid": "dbca904e-b3ce-45be-a3f4-67096f8fcf5a", 00:11:49.337 "is_configured": false, 00:11:49.337 "data_offset": 0, 00:11:49.337 "data_size": 65536 00:11:49.337 }, 00:11:49.337 { 00:11:49.337 "name": "BaseBdev3", 00:11:49.337 "uuid": "6476f825-3606-4f33-8c1c-cf5c5a0a9258", 00:11:49.337 "is_configured": true, 00:11:49.337 "data_offset": 0, 00:11:49.337 "data_size": 65536 00:11:49.337 } 00:11:49.337 ] 00:11:49.337 }' 00:11:49.337 03:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:49.337 03:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.275 03:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.275 03:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:50.275 03:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:11:50.275 03:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:50.533 [2024-05-15 03:06:21.576980] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.533 BaseBdev1 00:11:50.534 03:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:11:50.534 03:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:50.534 03:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:50.534 03:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:50.534 03:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:50.534 03:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:50.534 03:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:50.792 03:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:51.051 [ 00:11:51.051 { 00:11:51.051 "name": "BaseBdev1", 00:11:51.051 "aliases": [ 00:11:51.051 "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81" 00:11:51.051 ], 00:11:51.051 "product_name": "Malloc disk", 00:11:51.051 "block_size": 512, 00:11:51.051 "num_blocks": 65536, 00:11:51.051 "uuid": "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81", 00:11:51.052 "assigned_rate_limits": { 00:11:51.052 "rw_ios_per_sec": 0, 00:11:51.052 "rw_mbytes_per_sec": 0, 00:11:51.052 "r_mbytes_per_sec": 0, 00:11:51.052 "w_mbytes_per_sec": 0 00:11:51.052 }, 00:11:51.052 "claimed": true, 00:11:51.052 "claim_type": "exclusive_write", 00:11:51.052 "zoned": false, 00:11:51.052 "supported_io_types": { 00:11:51.052 "read": true, 00:11:51.052 "write": true, 00:11:51.052 "unmap": true, 00:11:51.052 "write_zeroes": true, 00:11:51.052 "flush": true, 00:11:51.052 "reset": true, 00:11:51.052 "compare": false, 00:11:51.052 "compare_and_write": false, 00:11:51.052 "abort": true, 00:11:51.052 "nvme_admin": false, 00:11:51.052 "nvme_io": false 00:11:51.052 }, 00:11:51.052 "memory_domains": [ 00:11:51.052 { 00:11:51.052 "dma_device_id": "system", 00:11:51.052 "dma_device_type": 1 00:11:51.052 }, 00:11:51.052 { 00:11:51.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.052 "dma_device_type": 2 00:11:51.052 } 00:11:51.052 ], 00:11:51.052 "driver_specific": {} 00:11:51.052 } 00:11:51.052 ] 00:11:51.052 03:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:51.052 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:51.052 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:51.052 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:51.052 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:51.052 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:51.052 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:51.052 03:06:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:51.052 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:51.052 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:51.052 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:51.052 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.052 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.311 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:51.311 "name": "Existed_Raid", 00:11:51.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.311 "strip_size_kb": 64, 00:11:51.311 "state": "configuring", 00:11:51.311 "raid_level": "raid0", 00:11:51.311 "superblock": false, 00:11:51.311 "num_base_bdevs": 3, 00:11:51.311 "num_base_bdevs_discovered": 2, 00:11:51.311 "num_base_bdevs_operational": 3, 00:11:51.311 "base_bdevs_list": [ 00:11:51.311 { 00:11:51.311 "name": "BaseBdev1", 00:11:51.311 "uuid": "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81", 00:11:51.311 "is_configured": true, 00:11:51.311 "data_offset": 0, 00:11:51.311 "data_size": 65536 00:11:51.311 }, 00:11:51.311 { 00:11:51.311 "name": null, 00:11:51.311 "uuid": "dbca904e-b3ce-45be-a3f4-67096f8fcf5a", 00:11:51.311 "is_configured": false, 00:11:51.311 "data_offset": 0, 00:11:51.311 "data_size": 65536 00:11:51.311 }, 00:11:51.311 { 00:11:51.311 "name": "BaseBdev3", 00:11:51.311 "uuid": "6476f825-3606-4f33-8c1c-cf5c5a0a9258", 00:11:51.311 "is_configured": true, 00:11:51.311 "data_offset": 0, 00:11:51.311 "data_size": 65536 00:11:51.311 } 00:11:51.311 ] 00:11:51.311 }' 00:11:51.311 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:51.311 03:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.894 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.894 03:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:52.153 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:52.153 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:52.415 [2024-05-15 03:06:23.462074] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:52.415 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:52.415 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:52.415 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:52.415 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:52.415 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:52.415 03:06:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:52.415 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:52.415 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:52.415 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:52.415 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:52.415 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.415 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.677 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:52.677 "name": "Existed_Raid", 00:11:52.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.677 "strip_size_kb": 64, 00:11:52.677 "state": "configuring", 00:11:52.677 "raid_level": "raid0", 00:11:52.677 "superblock": false, 00:11:52.677 "num_base_bdevs": 3, 00:11:52.677 "num_base_bdevs_discovered": 1, 00:11:52.677 "num_base_bdevs_operational": 3, 00:11:52.677 "base_bdevs_list": [ 00:11:52.677 { 00:11:52.677 "name": "BaseBdev1", 00:11:52.677 "uuid": "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81", 00:11:52.677 "is_configured": true, 00:11:52.677 "data_offset": 0, 00:11:52.677 "data_size": 65536 00:11:52.677 }, 00:11:52.677 { 00:11:52.677 "name": null, 00:11:52.677 "uuid": "dbca904e-b3ce-45be-a3f4-67096f8fcf5a", 00:11:52.677 "is_configured": false, 00:11:52.677 "data_offset": 0, 00:11:52.677 "data_size": 65536 00:11:52.677 }, 00:11:52.677 { 00:11:52.677 "name": null, 00:11:52.677 "uuid": "6476f825-3606-4f33-8c1c-cf5c5a0a9258", 00:11:52.677 "is_configured": false, 00:11:52.677 "data_offset": 0, 00:11:52.677 "data_size": 65536 00:11:52.677 } 00:11:52.677 ] 00:11:52.677 }' 00:11:52.677 03:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:52.677 03:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.244 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.244 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:53.503 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:11:53.503 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:53.762 [2024-05-15 03:06:24.853809] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.762 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:53.762 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:53.762 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:53.762 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
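
What the trace is exercising here is membership of a still-configuring array: bdev_raid_remove_base_bdev empties a slot (the slot stays in base_bdevs_list with "name": null and "is_configured": false, and num_base_bdevs_discovered drops), while bdev_raid_add_base_bdev fills it again. A condensed sketch of that remove/re-add cycle, with the RPC names and arguments as they appear above:

    # Detach BaseBdev3; its slot is kept open, so the array keeps configuring.
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3

    # Re-attach it to the named array, then confirm the slot count recovered.
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq '.[0].num_base_bdevs_discovered'
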
00:11:53.762 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:53.762 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:53.762 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:53.762 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:53.762 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:53.762 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:53.762 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.762 03:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.021 03:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:54.021 "name": "Existed_Raid", 00:11:54.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.021 "strip_size_kb": 64, 00:11:54.021 "state": "configuring", 00:11:54.021 "raid_level": "raid0", 00:11:54.021 "superblock": false, 00:11:54.021 "num_base_bdevs": 3, 00:11:54.021 "num_base_bdevs_discovered": 2, 00:11:54.021 "num_base_bdevs_operational": 3, 00:11:54.021 "base_bdevs_list": [ 00:11:54.021 { 00:11:54.021 "name": "BaseBdev1", 00:11:54.021 "uuid": "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81", 00:11:54.021 "is_configured": true, 00:11:54.021 "data_offset": 0, 00:11:54.021 "data_size": 65536 00:11:54.021 }, 00:11:54.021 { 00:11:54.021 "name": null, 00:11:54.021 "uuid": "dbca904e-b3ce-45be-a3f4-67096f8fcf5a", 00:11:54.021 "is_configured": false, 00:11:54.021 "data_offset": 0, 00:11:54.021 "data_size": 65536 00:11:54.021 }, 00:11:54.021 { 00:11:54.021 "name": "BaseBdev3", 00:11:54.021 "uuid": "6476f825-3606-4f33-8c1c-cf5c5a0a9258", 00:11:54.021 "is_configured": true, 00:11:54.021 "data_offset": 0, 00:11:54.021 "data_size": 65536 00:11:54.021 } 00:11:54.021 ] 00:11:54.021 }' 00:11:54.021 03:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:54.021 03:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.589 03:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.589 03:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:54.848 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:11:54.848 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:55.107 [2024-05-15 03:06:26.237543] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:55.365 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:55.365 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:55.365 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:55.365 
03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:55.365 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:55.365 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:55.365 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:55.365 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:55.365 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:55.365 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:55.366 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.366 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.624 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:55.624 "name": "Existed_Raid", 00:11:55.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.624 "strip_size_kb": 64, 00:11:55.624 "state": "configuring", 00:11:55.624 "raid_level": "raid0", 00:11:55.624 "superblock": false, 00:11:55.624 "num_base_bdevs": 3, 00:11:55.624 "num_base_bdevs_discovered": 1, 00:11:55.624 "num_base_bdevs_operational": 3, 00:11:55.624 "base_bdevs_list": [ 00:11:55.624 { 00:11:55.624 "name": null, 00:11:55.624 "uuid": "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81", 00:11:55.624 "is_configured": false, 00:11:55.624 "data_offset": 0, 00:11:55.624 "data_size": 65536 00:11:55.624 }, 00:11:55.624 { 00:11:55.624 "name": null, 00:11:55.624 "uuid": "dbca904e-b3ce-45be-a3f4-67096f8fcf5a", 00:11:55.624 "is_configured": false, 00:11:55.624 "data_offset": 0, 00:11:55.624 "data_size": 65536 00:11:55.624 }, 00:11:55.624 { 00:11:55.624 "name": "BaseBdev3", 00:11:55.624 "uuid": "6476f825-3606-4f33-8c1c-cf5c5a0a9258", 00:11:55.624 "is_configured": true, 00:11:55.624 "data_offset": 0, 00:11:55.624 "data_size": 65536 00:11:55.624 } 00:11:55.624 ] 00:11:55.624 }' 00:11:55.624 03:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:55.624 03:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.191 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.191 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:56.450 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:11:56.450 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:56.709 [2024-05-15 03:06:27.635546] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.709 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:56.709 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
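
The verify_raid_bdev_state helper being traced here fetches the named array once and checks the dump against the expected values; only the locals and the fetch show up in the log because the comparisons run after xtrace_disable. A condensed sketch of the helper as it can be read off this trace (the field checks after the fetch are an assumption about the part the trace hides):

    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5
        local raid_bdev_info

        # Fetch the array by name, exactly as the traced jq filter does.
        raid_bdev_info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all \
            | jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        # Hidden behind xtrace_disable in the log: compare the dump against
        # the expectations, e.g. the state field.
        [ "$(jq -r .state <<< "$raid_bdev_info")" = "$expected_state" ]
    }
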
00:11:56.709 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:56.709 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:56.709 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:56.709 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:56.709 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:56.709 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:56.709 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:56.709 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:56.710 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.710 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.968 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:56.968 "name": "Existed_Raid", 00:11:56.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.968 "strip_size_kb": 64, 00:11:56.968 "state": "configuring", 00:11:56.968 "raid_level": "raid0", 00:11:56.968 "superblock": false, 00:11:56.968 "num_base_bdevs": 3, 00:11:56.968 "num_base_bdevs_discovered": 2, 00:11:56.968 "num_base_bdevs_operational": 3, 00:11:56.968 "base_bdevs_list": [ 00:11:56.968 { 00:11:56.968 "name": null, 00:11:56.968 "uuid": "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81", 00:11:56.968 "is_configured": false, 00:11:56.968 "data_offset": 0, 00:11:56.968 "data_size": 65536 00:11:56.968 }, 00:11:56.968 { 00:11:56.968 "name": "BaseBdev2", 00:11:56.968 "uuid": "dbca904e-b3ce-45be-a3f4-67096f8fcf5a", 00:11:56.968 "is_configured": true, 00:11:56.968 "data_offset": 0, 00:11:56.968 "data_size": 65536 00:11:56.968 }, 00:11:56.968 { 00:11:56.969 "name": "BaseBdev3", 00:11:56.969 "uuid": "6476f825-3606-4f33-8c1c-cf5c5a0a9258", 00:11:56.969 "is_configured": true, 00:11:56.969 "data_offset": 0, 00:11:56.969 "data_size": 65536 00:11:56.969 } 00:11:56.969 ] 00:11:56.969 }' 00:11:56.969 03:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:56.969 03:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.537 03:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.537 03:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:57.796 03:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:11:57.796 03:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:57.796 03:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.055 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 91c8d4a4-dbe3-4c6a-89da-1eae70d59e81 00:11:58.318 [2024-05-15 03:06:29.283238] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:58.318 [2024-05-15 03:06:29.283272] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x21b61b0 00:11:58.318 [2024-05-15 03:06:29.283278] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:58.318 [2024-05-15 03:06:29.283477] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a6c70 00:11:58.318 [2024-05-15 03:06:29.283598] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21b61b0 00:11:58.318 [2024-05-15 03:06:29.283606] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x21b61b0 00:11:58.318 [2024-05-15 03:06:29.283765] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.318 NewBaseBdev 00:11:58.318 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:11:58.318 03:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:11:58.318 03:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:58.318 03:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:58.318 03:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:58.318 03:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:58.318 03:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:58.581 03:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:58.850 [ 00:11:58.850 { 00:11:58.850 "name": "NewBaseBdev", 00:11:58.850 "aliases": [ 00:11:58.850 "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81" 00:11:58.850 ], 00:11:58.850 "product_name": "Malloc disk", 00:11:58.850 "block_size": 512, 00:11:58.850 "num_blocks": 65536, 00:11:58.850 "uuid": "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81", 00:11:58.850 "assigned_rate_limits": { 00:11:58.850 "rw_ios_per_sec": 0, 00:11:58.850 "rw_mbytes_per_sec": 0, 00:11:58.850 "r_mbytes_per_sec": 0, 00:11:58.850 "w_mbytes_per_sec": 0 00:11:58.850 }, 00:11:58.850 "claimed": true, 00:11:58.850 "claim_type": "exclusive_write", 00:11:58.850 "zoned": false, 00:11:58.850 "supported_io_types": { 00:11:58.850 "read": true, 00:11:58.850 "write": true, 00:11:58.850 "unmap": true, 00:11:58.850 "write_zeroes": true, 00:11:58.850 "flush": true, 00:11:58.850 "reset": true, 00:11:58.850 "compare": false, 00:11:58.850 "compare_and_write": false, 00:11:58.850 "abort": true, 00:11:58.850 "nvme_admin": false, 00:11:58.850 "nvme_io": false 00:11:58.850 }, 00:11:58.850 "memory_domains": [ 00:11:58.850 { 00:11:58.850 "dma_device_id": "system", 00:11:58.850 "dma_device_type": 1 00:11:58.850 }, 00:11:58.850 { 00:11:58.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.850 "dma_device_type": 2 00:11:58.850 } 00:11:58.850 ], 00:11:58.850 "driver_specific": {} 00:11:58.851 } 00:11:58.851 ] 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # return 0 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.851 03:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.153 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:59.153 "name": "Existed_Raid", 00:11:59.153 "uuid": "7e566624-1395-48e4-8c81-1c28f2ce0439", 00:11:59.153 "strip_size_kb": 64, 00:11:59.153 "state": "online", 00:11:59.153 "raid_level": "raid0", 00:11:59.153 "superblock": false, 00:11:59.153 "num_base_bdevs": 3, 00:11:59.153 "num_base_bdevs_discovered": 3, 00:11:59.153 "num_base_bdevs_operational": 3, 00:11:59.153 "base_bdevs_list": [ 00:11:59.153 { 00:11:59.153 "name": "NewBaseBdev", 00:11:59.153 "uuid": "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81", 00:11:59.153 "is_configured": true, 00:11:59.153 "data_offset": 0, 00:11:59.153 "data_size": 65536 00:11:59.153 }, 00:11:59.153 { 00:11:59.153 "name": "BaseBdev2", 00:11:59.153 "uuid": "dbca904e-b3ce-45be-a3f4-67096f8fcf5a", 00:11:59.153 "is_configured": true, 00:11:59.153 "data_offset": 0, 00:11:59.153 "data_size": 65536 00:11:59.153 }, 00:11:59.153 { 00:11:59.153 "name": "BaseBdev3", 00:11:59.153 "uuid": "6476f825-3606-4f33-8c1c-cf5c5a0a9258", 00:11:59.153 "is_configured": true, 00:11:59.153 "data_offset": 0, 00:11:59.153 "data_size": 65536 00:11:59.153 } 00:11:59.153 ] 00:11:59.153 }' 00:11:59.153 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:59.153 03:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.721 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.721 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:11:59.721 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:11:59.721 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:11:59.721 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:11:59.721 03:06:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:11:59.721 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:59.721 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:11:59.979 [2024-05-15 03:06:30.891915] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.979 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:11:59.979 "name": "Existed_Raid", 00:11:59.979 "aliases": [ 00:11:59.979 "7e566624-1395-48e4-8c81-1c28f2ce0439" 00:11:59.979 ], 00:11:59.979 "product_name": "Raid Volume", 00:11:59.979 "block_size": 512, 00:11:59.979 "num_blocks": 196608, 00:11:59.979 "uuid": "7e566624-1395-48e4-8c81-1c28f2ce0439", 00:11:59.979 "assigned_rate_limits": { 00:11:59.979 "rw_ios_per_sec": 0, 00:11:59.979 "rw_mbytes_per_sec": 0, 00:11:59.979 "r_mbytes_per_sec": 0, 00:11:59.979 "w_mbytes_per_sec": 0 00:11:59.979 }, 00:11:59.979 "claimed": false, 00:11:59.979 "zoned": false, 00:11:59.979 "supported_io_types": { 00:11:59.980 "read": true, 00:11:59.980 "write": true, 00:11:59.980 "unmap": true, 00:11:59.980 "write_zeroes": true, 00:11:59.980 "flush": true, 00:11:59.980 "reset": true, 00:11:59.980 "compare": false, 00:11:59.980 "compare_and_write": false, 00:11:59.980 "abort": false, 00:11:59.980 "nvme_admin": false, 00:11:59.980 "nvme_io": false 00:11:59.980 }, 00:11:59.980 "memory_domains": [ 00:11:59.980 { 00:11:59.980 "dma_device_id": "system", 00:11:59.980 "dma_device_type": 1 00:11:59.980 }, 00:11:59.980 { 00:11:59.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.980 "dma_device_type": 2 00:11:59.980 }, 00:11:59.980 { 00:11:59.980 "dma_device_id": "system", 00:11:59.980 "dma_device_type": 1 00:11:59.980 }, 00:11:59.980 { 00:11:59.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.980 "dma_device_type": 2 00:11:59.980 }, 00:11:59.980 { 00:11:59.980 "dma_device_id": "system", 00:11:59.980 "dma_device_type": 1 00:11:59.980 }, 00:11:59.980 { 00:11:59.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.980 "dma_device_type": 2 00:11:59.980 } 00:11:59.980 ], 00:11:59.980 "driver_specific": { 00:11:59.980 "raid": { 00:11:59.980 "uuid": "7e566624-1395-48e4-8c81-1c28f2ce0439", 00:11:59.980 "strip_size_kb": 64, 00:11:59.980 "state": "online", 00:11:59.980 "raid_level": "raid0", 00:11:59.980 "superblock": false, 00:11:59.980 "num_base_bdevs": 3, 00:11:59.980 "num_base_bdevs_discovered": 3, 00:11:59.980 "num_base_bdevs_operational": 3, 00:11:59.980 "base_bdevs_list": [ 00:11:59.980 { 00:11:59.980 "name": "NewBaseBdev", 00:11:59.980 "uuid": "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81", 00:11:59.980 "is_configured": true, 00:11:59.980 "data_offset": 0, 00:11:59.980 "data_size": 65536 00:11:59.980 }, 00:11:59.980 { 00:11:59.980 "name": "BaseBdev2", 00:11:59.980 "uuid": "dbca904e-b3ce-45be-a3f4-67096f8fcf5a", 00:11:59.980 "is_configured": true, 00:11:59.980 "data_offset": 0, 00:11:59.980 "data_size": 65536 00:11:59.980 }, 00:11:59.980 { 00:11:59.980 "name": "BaseBdev3", 00:11:59.980 "uuid": "6476f825-3606-4f33-8c1c-cf5c5a0a9258", 00:11:59.980 "is_configured": true, 00:11:59.980 "data_offset": 0, 00:11:59.980 "data_size": 65536 00:11:59.980 } 00:11:59.980 ] 00:11:59.980 } 00:11:59.980 } 00:11:59.980 }' 00:11:59.980 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.980 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:11:59.980 BaseBdev2 00:11:59.980 BaseBdev3' 00:11:59.980 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:59.980 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:59.980 03:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:00.238 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:00.238 "name": "NewBaseBdev", 00:12:00.238 "aliases": [ 00:12:00.238 "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81" 00:12:00.238 ], 00:12:00.238 "product_name": "Malloc disk", 00:12:00.238 "block_size": 512, 00:12:00.238 "num_blocks": 65536, 00:12:00.238 "uuid": "91c8d4a4-dbe3-4c6a-89da-1eae70d59e81", 00:12:00.238 "assigned_rate_limits": { 00:12:00.238 "rw_ios_per_sec": 0, 00:12:00.238 "rw_mbytes_per_sec": 0, 00:12:00.238 "r_mbytes_per_sec": 0, 00:12:00.238 "w_mbytes_per_sec": 0 00:12:00.238 }, 00:12:00.238 "claimed": true, 00:12:00.238 "claim_type": "exclusive_write", 00:12:00.238 "zoned": false, 00:12:00.238 "supported_io_types": { 00:12:00.238 "read": true, 00:12:00.238 "write": true, 00:12:00.238 "unmap": true, 00:12:00.238 "write_zeroes": true, 00:12:00.238 "flush": true, 00:12:00.238 "reset": true, 00:12:00.238 "compare": false, 00:12:00.238 "compare_and_write": false, 00:12:00.238 "abort": true, 00:12:00.238 "nvme_admin": false, 00:12:00.238 "nvme_io": false 00:12:00.238 }, 00:12:00.238 "memory_domains": [ 00:12:00.238 { 00:12:00.238 "dma_device_id": "system", 00:12:00.238 "dma_device_type": 1 00:12:00.238 }, 00:12:00.238 { 00:12:00.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.238 "dma_device_type": 2 00:12:00.238 } 00:12:00.238 ], 00:12:00.238 "driver_specific": {} 00:12:00.238 }' 00:12:00.238 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:00.238 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:00.238 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:00.238 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:00.238 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:00.238 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:00.238 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:00.496 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:00.496 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:00.496 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:00.496 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:00.496 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:00.496 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:00.496 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:00.496 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:00.754 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:00.754 "name": "BaseBdev2", 00:12:00.754 "aliases": [ 00:12:00.754 "dbca904e-b3ce-45be-a3f4-67096f8fcf5a" 00:12:00.754 ], 00:12:00.754 "product_name": "Malloc disk", 00:12:00.754 "block_size": 512, 00:12:00.754 "num_blocks": 65536, 00:12:00.754 "uuid": "dbca904e-b3ce-45be-a3f4-67096f8fcf5a", 00:12:00.754 "assigned_rate_limits": { 00:12:00.754 "rw_ios_per_sec": 0, 00:12:00.754 "rw_mbytes_per_sec": 0, 00:12:00.754 "r_mbytes_per_sec": 0, 00:12:00.754 "w_mbytes_per_sec": 0 00:12:00.754 }, 00:12:00.754 "claimed": true, 00:12:00.754 "claim_type": "exclusive_write", 00:12:00.754 "zoned": false, 00:12:00.754 "supported_io_types": { 00:12:00.754 "read": true, 00:12:00.754 "write": true, 00:12:00.754 "unmap": true, 00:12:00.754 "write_zeroes": true, 00:12:00.754 "flush": true, 00:12:00.754 "reset": true, 00:12:00.754 "compare": false, 00:12:00.754 "compare_and_write": false, 00:12:00.754 "abort": true, 00:12:00.754 "nvme_admin": false, 00:12:00.754 "nvme_io": false 00:12:00.754 }, 00:12:00.754 "memory_domains": [ 00:12:00.754 { 00:12:00.754 "dma_device_id": "system", 00:12:00.754 "dma_device_type": 1 00:12:00.754 }, 00:12:00.754 { 00:12:00.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.754 "dma_device_type": 2 00:12:00.754 } 00:12:00.754 ], 00:12:00.754 "driver_specific": {} 00:12:00.754 }' 00:12:00.754 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:00.754 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:01.012 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:01.013 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:01.013 03:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:01.013 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:01.013 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:01.013 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:01.013 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:01.013 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:01.013 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:01.271 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:01.271 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:01.271 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:01.271 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:01.529 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:01.529 "name": "BaseBdev3", 00:12:01.529 "aliases": [ 00:12:01.529 "6476f825-3606-4f33-8c1c-cf5c5a0a9258" 00:12:01.529 ], 00:12:01.529 "product_name": 
"Malloc disk", 00:12:01.529 "block_size": 512, 00:12:01.529 "num_blocks": 65536, 00:12:01.529 "uuid": "6476f825-3606-4f33-8c1c-cf5c5a0a9258", 00:12:01.529 "assigned_rate_limits": { 00:12:01.529 "rw_ios_per_sec": 0, 00:12:01.529 "rw_mbytes_per_sec": 0, 00:12:01.529 "r_mbytes_per_sec": 0, 00:12:01.529 "w_mbytes_per_sec": 0 00:12:01.529 }, 00:12:01.529 "claimed": true, 00:12:01.529 "claim_type": "exclusive_write", 00:12:01.529 "zoned": false, 00:12:01.529 "supported_io_types": { 00:12:01.529 "read": true, 00:12:01.529 "write": true, 00:12:01.529 "unmap": true, 00:12:01.529 "write_zeroes": true, 00:12:01.529 "flush": true, 00:12:01.529 "reset": true, 00:12:01.529 "compare": false, 00:12:01.529 "compare_and_write": false, 00:12:01.529 "abort": true, 00:12:01.529 "nvme_admin": false, 00:12:01.529 "nvme_io": false 00:12:01.529 }, 00:12:01.529 "memory_domains": [ 00:12:01.529 { 00:12:01.529 "dma_device_id": "system", 00:12:01.529 "dma_device_type": 1 00:12:01.529 }, 00:12:01.529 { 00:12:01.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.529 "dma_device_type": 2 00:12:01.529 } 00:12:01.529 ], 00:12:01.529 "driver_specific": {} 00:12:01.529 }' 00:12:01.529 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:01.529 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:01.529 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:01.529 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:01.529 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:01.529 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:01.529 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:01.788 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:01.788 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:01.788 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:01.788 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:01.788 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:01.788 03:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:02.047 [2024-05-15 03:06:33.057401] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.047 [2024-05-15 03:06:33.057424] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.047 [2024-05-15 03:06:33.057478] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.047 [2024-05-15 03:06:33.057530] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.047 [2024-05-15 03:06:33.057539] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21b61b0 name Existed_Raid, state offline 00:12:02.047 03:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 4059681 00:12:02.047 03:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 4059681 ']' 00:12:02.047 03:06:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # kill -0 4059681 00:12:02.047 03:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:12:02.047 03:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:02.047 03:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4059681 00:12:02.047 03:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:02.047 03:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:02.047 03:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4059681' 00:12:02.047 killing process with pid 4059681 00:12:02.047 03:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 4059681 00:12:02.047 [2024-05-15 03:06:33.119973] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.047 03:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 4059681 00:12:02.047 [2024-05-15 03:06:33.145489] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:12:02.306 00:12:02.306 real 0m29.252s 00:12:02.306 user 0m54.824s 00:12:02.306 sys 0m4.131s 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.306 ************************************ 00:12:02.306 END TEST raid_state_function_test 00:12:02.306 ************************************ 00:12:02.306 03:06:33 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:12:02.306 03:06:33 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:02.306 03:06:33 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:02.306 03:06:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.306 ************************************ 00:12:02.306 START TEST raid_state_function_test_sb 00:12:02.306 ************************************ 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 true 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 
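
The verify_raid_bdev_state expansions traced at bdev_raid.sh@117-129 above (and repeated throughout the superblock run below) all reduce to one idiom: dump every raid bdev over the test socket, select the target by name with jq, and compare the extracted fields against the expected values. A minimal standalone sketch of that check, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock; the helper name check_raid_state is hypothetical, while the rpc.py path, RPC method, and field names are copied from the trace:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Dump all raid bdevs, pick one by name, and assert on its fields.
    check_raid_state() {                       # hypothetical helper
        local name=$1 state=$2 level=$3 strip=$4
        local info
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r .state         <<< "$info") == "$state" ]] &&
        [[ $(jq -r .raid_level    <<< "$info") == "$level" ]] &&
        [[ $(jq -r .strip_size_kb <<< "$info") == "$strip" ]]
    }

    check_raid_state Existed_Raid online raid0 64 || echo 'state mismatch'

The real helper additionally compares num_base_bdevs_discovered against num_base_bdevs_operational, which is how the trace distinguishes the earlier two-of-three "configuring" dump from the three-of-three "online" one.
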
00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=4065544 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4065544' 00:12:02.306 Process raid pid: 4065544 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 4065544 /var/tmp/spdk-raid.sock 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 4065544 ']' 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:02.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:02.306 03:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.565 [2024-05-15 03:06:33.511121] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
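
Condensed, the setup this superblock run performs is: start a bare bdev_svc app on its own RPC socket, register the raid0 array first (so its state can be observed while still "configuring", before any base bdev exists), then create the malloc base bdevs underneath it. A sketch under those assumptions, with binary paths and flag values copied from the trace; the socket-polling loop is a crude stand-in for the waitforlisten helper the harness actually uses, and the trace also deletes and recreates the array several times, which is omitted here:

    #!/usr/bin/env bash
    app=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Bare SPDK app: private RPC socket, shm id 0, bdev_raid debug logging.
    "$app" -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    while [ ! -S "$sock" ]; do sleep 0.1; done   # stand-in for waitforlisten

    # raid0, 64 KiB strip, with an on-disk superblock (-s); the base bdevs
    # do not exist yet, so the array sits in the "configuring" state.
    "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # 32 MiB malloc bdevs with 512 B blocks (65536 blocks each); each one
    # that gets claimed moves the array closer to "online".
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
    done

The -s flag is the only functional difference from the earlier non-superblock run, and it explains the numbers in the dumps that follow: reserving 2048 of each base bdev's 65536 blocks for the superblock leaves data_offset 2048 and data_size 63488, instead of the 0 and 65536 seen above.
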
00:12:02.565 [2024-05-15 03:06:33.511173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.565 [2024-05-15 03:06:33.608463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.565 [2024-05-15 03:06:33.702090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.824 [2024-05-15 03:06:33.762982] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.824 [2024-05-15 03:06:33.763013] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.392 03:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:03.392 03:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:12:03.392 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:03.650 [2024-05-15 03:06:34.705834] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.650 [2024-05-15 03:06:34.705880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.650 [2024-05-15 03:06:34.705890] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.650 [2024-05-15 03:06:34.705899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.650 [2024-05-15 03:06:34.705906] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.650 [2024-05-15 03:06:34.705914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.650 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:03.650 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:03.650 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:03.650 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:03.650 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:03.650 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:03.650 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:03.650 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:03.650 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:03.650 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:03.650 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.650 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.909 03:06:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:03.909 "name": "Existed_Raid", 00:12:03.909 "uuid": "0fd979da-96f5-4938-b77c-552e5505e107", 00:12:03.909 "strip_size_kb": 64, 00:12:03.909 "state": "configuring", 00:12:03.909 "raid_level": "raid0", 00:12:03.909 "superblock": true, 00:12:03.909 "num_base_bdevs": 3, 00:12:03.909 "num_base_bdevs_discovered": 0, 00:12:03.909 "num_base_bdevs_operational": 3, 00:12:03.909 "base_bdevs_list": [ 00:12:03.909 { 00:12:03.909 "name": "BaseBdev1", 00:12:03.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.909 "is_configured": false, 00:12:03.909 "data_offset": 0, 00:12:03.909 "data_size": 0 00:12:03.909 }, 00:12:03.909 { 00:12:03.909 "name": "BaseBdev2", 00:12:03.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.909 "is_configured": false, 00:12:03.909 "data_offset": 0, 00:12:03.909 "data_size": 0 00:12:03.909 }, 00:12:03.909 { 00:12:03.909 "name": "BaseBdev3", 00:12:03.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.909 "is_configured": false, 00:12:03.909 "data_offset": 0, 00:12:03.909 "data_size": 0 00:12:03.909 } 00:12:03.909 ] 00:12:03.909 }' 00:12:03.909 03:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:03.909 03:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.476 03:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:04.734 [2024-05-15 03:06:35.836696] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:04.734 [2024-05-15 03:06:35.836724] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1428de0 name Existed_Raid, state configuring 00:12:04.734 03:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:04.993 [2024-05-15 03:06:36.093399] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:04.993 [2024-05-15 03:06:36.093425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:04.993 [2024-05-15 03:06:36.093433] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:04.993 [2024-05-15 03:06:36.093442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:04.993 [2024-05-15 03:06:36.093449] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:04.993 [2024-05-15 03:06:36.093457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:04.993 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:05.251 [2024-05-15 03:06:36.359571] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.251 BaseBdev1 00:12:05.251 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:12:05.251 03:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:05.251 03:06:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:05.251 03:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:05.251 03:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:05.251 03:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:05.251 03:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:05.509 03:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:05.767 [ 00:12:05.767 { 00:12:05.767 "name": "BaseBdev1", 00:12:05.767 "aliases": [ 00:12:05.767 "5c8ba430-90f3-4d12-9fa4-d52a5f3adc1f" 00:12:05.767 ], 00:12:05.767 "product_name": "Malloc disk", 00:12:05.767 "block_size": 512, 00:12:05.767 "num_blocks": 65536, 00:12:05.767 "uuid": "5c8ba430-90f3-4d12-9fa4-d52a5f3adc1f", 00:12:05.767 "assigned_rate_limits": { 00:12:05.767 "rw_ios_per_sec": 0, 00:12:05.767 "rw_mbytes_per_sec": 0, 00:12:05.767 "r_mbytes_per_sec": 0, 00:12:05.767 "w_mbytes_per_sec": 0 00:12:05.767 }, 00:12:05.767 "claimed": true, 00:12:05.767 "claim_type": "exclusive_write", 00:12:05.767 "zoned": false, 00:12:05.767 "supported_io_types": { 00:12:05.767 "read": true, 00:12:05.767 "write": true, 00:12:05.767 "unmap": true, 00:12:05.767 "write_zeroes": true, 00:12:05.767 "flush": true, 00:12:05.767 "reset": true, 00:12:05.767 "compare": false, 00:12:05.767 "compare_and_write": false, 00:12:05.767 "abort": true, 00:12:05.767 "nvme_admin": false, 00:12:05.767 "nvme_io": false 00:12:05.767 }, 00:12:05.767 "memory_domains": [ 00:12:05.767 { 00:12:05.767 "dma_device_id": "system", 00:12:05.768 "dma_device_type": 1 00:12:05.768 }, 00:12:05.768 { 00:12:05.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.768 "dma_device_type": 2 00:12:05.768 } 00:12:05.768 ], 00:12:05.768 "driver_specific": {} 00:12:05.768 } 00:12:05.768 ] 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.768 03:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.026 03:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:06.026 "name": "Existed_Raid", 00:12:06.026 "uuid": "2d079110-906d-4d56-9f1f-b1f34383a182", 00:12:06.026 "strip_size_kb": 64, 00:12:06.026 "state": "configuring", 00:12:06.026 "raid_level": "raid0", 00:12:06.026 "superblock": true, 00:12:06.026 "num_base_bdevs": 3, 00:12:06.026 "num_base_bdevs_discovered": 1, 00:12:06.026 "num_base_bdevs_operational": 3, 00:12:06.026 "base_bdevs_list": [ 00:12:06.026 { 00:12:06.026 "name": "BaseBdev1", 00:12:06.026 "uuid": "5c8ba430-90f3-4d12-9fa4-d52a5f3adc1f", 00:12:06.026 "is_configured": true, 00:12:06.026 "data_offset": 2048, 00:12:06.026 "data_size": 63488 00:12:06.026 }, 00:12:06.026 { 00:12:06.026 "name": "BaseBdev2", 00:12:06.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.026 "is_configured": false, 00:12:06.026 "data_offset": 0, 00:12:06.026 "data_size": 0 00:12:06.026 }, 00:12:06.026 { 00:12:06.026 "name": "BaseBdev3", 00:12:06.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.026 "is_configured": false, 00:12:06.026 "data_offset": 0, 00:12:06.026 "data_size": 0 00:12:06.026 } 00:12:06.026 ] 00:12:06.026 }' 00:12:06.026 03:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:06.026 03:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.072 03:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:07.072 [2024-05-15 03:06:38.000076] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:07.072 [2024-05-15 03:06:38.000113] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14286b0 name Existed_Raid, state configuring 00:12:07.072 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:07.339 [2024-05-15 03:06:38.256798] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.339 [2024-05-15 03:06:38.258311] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:07.339 [2024-05-15 03:06:38.258341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:07.339 [2024-05-15 03:06:38.258350] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:07.339 [2024-05-15 03:06:38.258359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.339 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.596 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:07.596 "name": "Existed_Raid", 00:12:07.596 "uuid": "1188c708-5c00-4288-b4bc-f66203b458ff", 00:12:07.596 "strip_size_kb": 64, 00:12:07.596 "state": "configuring", 00:12:07.596 "raid_level": "raid0", 00:12:07.596 "superblock": true, 00:12:07.596 "num_base_bdevs": 3, 00:12:07.596 "num_base_bdevs_discovered": 1, 00:12:07.596 "num_base_bdevs_operational": 3, 00:12:07.596 "base_bdevs_list": [ 00:12:07.596 { 00:12:07.596 "name": "BaseBdev1", 00:12:07.596 "uuid": "5c8ba430-90f3-4d12-9fa4-d52a5f3adc1f", 00:12:07.596 "is_configured": true, 00:12:07.596 "data_offset": 2048, 00:12:07.596 "data_size": 63488 00:12:07.596 }, 00:12:07.596 { 00:12:07.596 "name": "BaseBdev2", 00:12:07.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.596 "is_configured": false, 00:12:07.596 "data_offset": 0, 00:12:07.596 "data_size": 0 00:12:07.596 }, 00:12:07.596 { 00:12:07.596 "name": "BaseBdev3", 00:12:07.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.596 "is_configured": false, 00:12:07.596 "data_offset": 0, 00:12:07.596 "data_size": 0 00:12:07.596 } 00:12:07.596 ] 00:12:07.596 }' 00:12:07.596 03:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:07.596 03:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.162 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:08.419 [2024-05-15 03:06:39.399107] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.419 BaseBdev2 00:12:08.419 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:12:08.419 03:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:08.419 03:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:08.419 03:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:08.419 03:06:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:08.419 03:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:08.419 03:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:08.677 03:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:08.935 [ 00:12:08.935 { 00:12:08.935 "name": "BaseBdev2", 00:12:08.935 "aliases": [ 00:12:08.935 "f5d8df51-e82f-408b-9767-4a04346385ac" 00:12:08.935 ], 00:12:08.935 "product_name": "Malloc disk", 00:12:08.935 "block_size": 512, 00:12:08.935 "num_blocks": 65536, 00:12:08.935 "uuid": "f5d8df51-e82f-408b-9767-4a04346385ac", 00:12:08.935 "assigned_rate_limits": { 00:12:08.935 "rw_ios_per_sec": 0, 00:12:08.935 "rw_mbytes_per_sec": 0, 00:12:08.935 "r_mbytes_per_sec": 0, 00:12:08.935 "w_mbytes_per_sec": 0 00:12:08.935 }, 00:12:08.935 "claimed": true, 00:12:08.935 "claim_type": "exclusive_write", 00:12:08.935 "zoned": false, 00:12:08.935 "supported_io_types": { 00:12:08.935 "read": true, 00:12:08.935 "write": true, 00:12:08.935 "unmap": true, 00:12:08.935 "write_zeroes": true, 00:12:08.935 "flush": true, 00:12:08.935 "reset": true, 00:12:08.935 "compare": false, 00:12:08.935 "compare_and_write": false, 00:12:08.935 "abort": true, 00:12:08.935 "nvme_admin": false, 00:12:08.935 "nvme_io": false 00:12:08.935 }, 00:12:08.935 "memory_domains": [ 00:12:08.935 { 00:12:08.935 "dma_device_id": "system", 00:12:08.935 "dma_device_type": 1 00:12:08.935 }, 00:12:08.935 { 00:12:08.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.935 "dma_device_type": 2 00:12:08.935 } 00:12:08.935 ], 00:12:08.935 "driver_specific": {} 00:12:08.935 } 00:12:08.935 ] 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:08.935 03:06:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.935 03:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.193 03:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:09.193 "name": "Existed_Raid", 00:12:09.193 "uuid": "1188c708-5c00-4288-b4bc-f66203b458ff", 00:12:09.193 "strip_size_kb": 64, 00:12:09.193 "state": "configuring", 00:12:09.193 "raid_level": "raid0", 00:12:09.193 "superblock": true, 00:12:09.193 "num_base_bdevs": 3, 00:12:09.193 "num_base_bdevs_discovered": 2, 00:12:09.193 "num_base_bdevs_operational": 3, 00:12:09.193 "base_bdevs_list": [ 00:12:09.193 { 00:12:09.193 "name": "BaseBdev1", 00:12:09.193 "uuid": "5c8ba430-90f3-4d12-9fa4-d52a5f3adc1f", 00:12:09.193 "is_configured": true, 00:12:09.193 "data_offset": 2048, 00:12:09.193 "data_size": 63488 00:12:09.193 }, 00:12:09.193 { 00:12:09.193 "name": "BaseBdev2", 00:12:09.193 "uuid": "f5d8df51-e82f-408b-9767-4a04346385ac", 00:12:09.193 "is_configured": true, 00:12:09.193 "data_offset": 2048, 00:12:09.193 "data_size": 63488 00:12:09.193 }, 00:12:09.193 { 00:12:09.193 "name": "BaseBdev3", 00:12:09.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.193 "is_configured": false, 00:12:09.193 "data_offset": 0, 00:12:09.193 "data_size": 0 00:12:09.193 } 00:12:09.193 ] 00:12:09.193 }' 00:12:09.193 03:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:09.193 03:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.759 03:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:10.018 [2024-05-15 03:06:41.036047] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.018 [2024-05-15 03:06:41.036218] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1429760 00:12:10.018 [2024-05-15 03:06:41.036231] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:10.018 [2024-05-15 03:06:41.036424] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1440690 00:12:10.018 [2024-05-15 03:06:41.036557] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1429760 00:12:10.018 [2024-05-15 03:06:41.036565] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1429760 00:12:10.018 [2024-05-15 03:06:41.036666] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.018 BaseBdev3 00:12:10.018 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:12:10.018 03:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:12:10.018 03:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:10.018 03:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:10.018 03:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:10.018 03:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 
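
The waitforbdev BaseBdev3 expansion around common/autotest_common.sh@895-903 here is the harness's standard block-until-registered idiom: let outstanding examine callbacks drain, then query the bdev by name with an RPC-side timeout. A minimal sketch of the same idea, reusing the rpc and sock variables from the sketches above; the function body is a reconstruction from the trace, where 2000 ms is the default being filled in at @898:

    # Block until a named bdev is registered, or fail after timeout_ms.
    waitforbdev() {
        local name=$1 timeout_ms=${2:-2000}
        "$rpc" -s "$sock" bdev_wait_for_examine
        # -t makes bdev_get_bdevs itself wait up to timeout_ms for the
        # bdev to appear instead of erroring out immediately.
        "$rpc" -s "$sock" bdev_get_bdevs -b "$name" -t "$timeout_ms" > /dev/null
    }

    waitforbdev BaseBdev3    # returns 0 once BaseBdev3 is registered
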
00:12:10.018 03:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:10.276 03:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:10.534 [ 00:12:10.534 { 00:12:10.534 "name": "BaseBdev3", 00:12:10.534 "aliases": [ 00:12:10.535 "31fbaac2-d738-448b-b15f-03fd5143effc" 00:12:10.535 ], 00:12:10.535 "product_name": "Malloc disk", 00:12:10.535 "block_size": 512, 00:12:10.535 "num_blocks": 65536, 00:12:10.535 "uuid": "31fbaac2-d738-448b-b15f-03fd5143effc", 00:12:10.535 "assigned_rate_limits": { 00:12:10.535 "rw_ios_per_sec": 0, 00:12:10.535 "rw_mbytes_per_sec": 0, 00:12:10.535 "r_mbytes_per_sec": 0, 00:12:10.535 "w_mbytes_per_sec": 0 00:12:10.535 }, 00:12:10.535 "claimed": true, 00:12:10.535 "claim_type": "exclusive_write", 00:12:10.535 "zoned": false, 00:12:10.535 "supported_io_types": { 00:12:10.535 "read": true, 00:12:10.535 "write": true, 00:12:10.535 "unmap": true, 00:12:10.535 "write_zeroes": true, 00:12:10.535 "flush": true, 00:12:10.535 "reset": true, 00:12:10.535 "compare": false, 00:12:10.535 "compare_and_write": false, 00:12:10.535 "abort": true, 00:12:10.535 "nvme_admin": false, 00:12:10.535 "nvme_io": false 00:12:10.535 }, 00:12:10.535 "memory_domains": [ 00:12:10.535 { 00:12:10.535 "dma_device_id": "system", 00:12:10.535 "dma_device_type": 1 00:12:10.535 }, 00:12:10.535 { 00:12:10.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.535 "dma_device_type": 2 00:12:10.535 } 00:12:10.535 ], 00:12:10.535 "driver_specific": {} 00:12:10.535 } 00:12:10.535 ] 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:10.535 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.535 03:06:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.793 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:10.793 "name": "Existed_Raid", 00:12:10.793 "uuid": "1188c708-5c00-4288-b4bc-f66203b458ff", 00:12:10.793 "strip_size_kb": 64, 00:12:10.793 "state": "online", 00:12:10.793 "raid_level": "raid0", 00:12:10.793 "superblock": true, 00:12:10.793 "num_base_bdevs": 3, 00:12:10.793 "num_base_bdevs_discovered": 3, 00:12:10.793 "num_base_bdevs_operational": 3, 00:12:10.793 "base_bdevs_list": [ 00:12:10.793 { 00:12:10.793 "name": "BaseBdev1", 00:12:10.793 "uuid": "5c8ba430-90f3-4d12-9fa4-d52a5f3adc1f", 00:12:10.793 "is_configured": true, 00:12:10.793 "data_offset": 2048, 00:12:10.793 "data_size": 63488 00:12:10.793 }, 00:12:10.793 { 00:12:10.793 "name": "BaseBdev2", 00:12:10.793 "uuid": "f5d8df51-e82f-408b-9767-4a04346385ac", 00:12:10.793 "is_configured": true, 00:12:10.793 "data_offset": 2048, 00:12:10.793 "data_size": 63488 00:12:10.793 }, 00:12:10.793 { 00:12:10.793 "name": "BaseBdev3", 00:12:10.793 "uuid": "31fbaac2-d738-448b-b15f-03fd5143effc", 00:12:10.793 "is_configured": true, 00:12:10.793 "data_offset": 2048, 00:12:10.793 "data_size": 63488 00:12:10.793 } 00:12:10.793 ] 00:12:10.793 }' 00:12:10.793 03:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:10.793 03:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.358 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:12:11.358 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:12:11.358 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:11.358 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:11.358 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:11.358 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:12:11.358 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:11.358 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:11.616 [2024-05-15 03:06:42.644679] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.616 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:11.616 "name": "Existed_Raid", 00:12:11.616 "aliases": [ 00:12:11.616 "1188c708-5c00-4288-b4bc-f66203b458ff" 00:12:11.616 ], 00:12:11.616 "product_name": "Raid Volume", 00:12:11.616 "block_size": 512, 00:12:11.616 "num_blocks": 190464, 00:12:11.616 "uuid": "1188c708-5c00-4288-b4bc-f66203b458ff", 00:12:11.616 "assigned_rate_limits": { 00:12:11.616 "rw_ios_per_sec": 0, 00:12:11.616 "rw_mbytes_per_sec": 0, 00:12:11.616 "r_mbytes_per_sec": 0, 00:12:11.616 "w_mbytes_per_sec": 0 00:12:11.616 }, 00:12:11.616 "claimed": false, 00:12:11.616 "zoned": false, 00:12:11.616 "supported_io_types": { 00:12:11.616 "read": true, 00:12:11.616 "write": true, 00:12:11.616 "unmap": true, 00:12:11.616 "write_zeroes": true, 00:12:11.616 "flush": true, 00:12:11.616 "reset": true, 00:12:11.616 
"compare": false, 00:12:11.616 "compare_and_write": false, 00:12:11.616 "abort": false, 00:12:11.616 "nvme_admin": false, 00:12:11.616 "nvme_io": false 00:12:11.616 }, 00:12:11.616 "memory_domains": [ 00:12:11.616 { 00:12:11.616 "dma_device_id": "system", 00:12:11.616 "dma_device_type": 1 00:12:11.616 }, 00:12:11.616 { 00:12:11.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.616 "dma_device_type": 2 00:12:11.616 }, 00:12:11.616 { 00:12:11.616 "dma_device_id": "system", 00:12:11.616 "dma_device_type": 1 00:12:11.616 }, 00:12:11.616 { 00:12:11.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.616 "dma_device_type": 2 00:12:11.616 }, 00:12:11.616 { 00:12:11.616 "dma_device_id": "system", 00:12:11.616 "dma_device_type": 1 00:12:11.616 }, 00:12:11.616 { 00:12:11.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.616 "dma_device_type": 2 00:12:11.616 } 00:12:11.616 ], 00:12:11.616 "driver_specific": { 00:12:11.616 "raid": { 00:12:11.616 "uuid": "1188c708-5c00-4288-b4bc-f66203b458ff", 00:12:11.616 "strip_size_kb": 64, 00:12:11.616 "state": "online", 00:12:11.616 "raid_level": "raid0", 00:12:11.616 "superblock": true, 00:12:11.616 "num_base_bdevs": 3, 00:12:11.616 "num_base_bdevs_discovered": 3, 00:12:11.616 "num_base_bdevs_operational": 3, 00:12:11.616 "base_bdevs_list": [ 00:12:11.616 { 00:12:11.616 "name": "BaseBdev1", 00:12:11.616 "uuid": "5c8ba430-90f3-4d12-9fa4-d52a5f3adc1f", 00:12:11.616 "is_configured": true, 00:12:11.616 "data_offset": 2048, 00:12:11.616 "data_size": 63488 00:12:11.616 }, 00:12:11.616 { 00:12:11.616 "name": "BaseBdev2", 00:12:11.616 "uuid": "f5d8df51-e82f-408b-9767-4a04346385ac", 00:12:11.616 "is_configured": true, 00:12:11.616 "data_offset": 2048, 00:12:11.616 "data_size": 63488 00:12:11.616 }, 00:12:11.616 { 00:12:11.616 "name": "BaseBdev3", 00:12:11.616 "uuid": "31fbaac2-d738-448b-b15f-03fd5143effc", 00:12:11.616 "is_configured": true, 00:12:11.616 "data_offset": 2048, 00:12:11.616 "data_size": 63488 00:12:11.616 } 00:12:11.616 ] 00:12:11.616 } 00:12:11.616 } 00:12:11.616 }' 00:12:11.617 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:11.617 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:12:11.617 BaseBdev2 00:12:11.617 BaseBdev3' 00:12:11.617 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:11.617 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:11.617 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:11.875 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:11.875 "name": "BaseBdev1", 00:12:11.875 "aliases": [ 00:12:11.875 "5c8ba430-90f3-4d12-9fa4-d52a5f3adc1f" 00:12:11.875 ], 00:12:11.875 "product_name": "Malloc disk", 00:12:11.875 "block_size": 512, 00:12:11.875 "num_blocks": 65536, 00:12:11.875 "uuid": "5c8ba430-90f3-4d12-9fa4-d52a5f3adc1f", 00:12:11.875 "assigned_rate_limits": { 00:12:11.875 "rw_ios_per_sec": 0, 00:12:11.875 "rw_mbytes_per_sec": 0, 00:12:11.875 "r_mbytes_per_sec": 0, 00:12:11.875 "w_mbytes_per_sec": 0 00:12:11.875 }, 00:12:11.875 "claimed": true, 00:12:11.875 "claim_type": "exclusive_write", 00:12:11.875 "zoned": false, 00:12:11.875 
"supported_io_types": { 00:12:11.875 "read": true, 00:12:11.875 "write": true, 00:12:11.875 "unmap": true, 00:12:11.875 "write_zeroes": true, 00:12:11.875 "flush": true, 00:12:11.875 "reset": true, 00:12:11.875 "compare": false, 00:12:11.875 "compare_and_write": false, 00:12:11.875 "abort": true, 00:12:11.875 "nvme_admin": false, 00:12:11.875 "nvme_io": false 00:12:11.875 }, 00:12:11.875 "memory_domains": [ 00:12:11.875 { 00:12:11.875 "dma_device_id": "system", 00:12:11.875 "dma_device_type": 1 00:12:11.875 }, 00:12:11.875 { 00:12:11.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.875 "dma_device_type": 2 00:12:11.875 } 00:12:11.875 ], 00:12:11.875 "driver_specific": {} 00:12:11.875 }' 00:12:11.875 03:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:11.875 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:12.133 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:12.133 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:12.133 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:12.133 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:12.133 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:12.133 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:12.133 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:12.133 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:12.391 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:12.391 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:12.391 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:12.391 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:12.391 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:12.650 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:12.650 "name": "BaseBdev2", 00:12:12.650 "aliases": [ 00:12:12.650 "f5d8df51-e82f-408b-9767-4a04346385ac" 00:12:12.650 ], 00:12:12.650 "product_name": "Malloc disk", 00:12:12.650 "block_size": 512, 00:12:12.650 "num_blocks": 65536, 00:12:12.650 "uuid": "f5d8df51-e82f-408b-9767-4a04346385ac", 00:12:12.650 "assigned_rate_limits": { 00:12:12.650 "rw_ios_per_sec": 0, 00:12:12.650 "rw_mbytes_per_sec": 0, 00:12:12.650 "r_mbytes_per_sec": 0, 00:12:12.650 "w_mbytes_per_sec": 0 00:12:12.650 }, 00:12:12.650 "claimed": true, 00:12:12.650 "claim_type": "exclusive_write", 00:12:12.650 "zoned": false, 00:12:12.650 "supported_io_types": { 00:12:12.650 "read": true, 00:12:12.650 "write": true, 00:12:12.650 "unmap": true, 00:12:12.650 "write_zeroes": true, 00:12:12.650 "flush": true, 00:12:12.650 "reset": true, 00:12:12.650 "compare": false, 00:12:12.650 "compare_and_write": false, 00:12:12.650 "abort": true, 00:12:12.650 "nvme_admin": false, 00:12:12.650 "nvme_io": false 00:12:12.650 }, 00:12:12.650 "memory_domains": [ 00:12:12.650 { 
00:12:12.650 "dma_device_id": "system", 00:12:12.650 "dma_device_type": 1 00:12:12.650 }, 00:12:12.650 { 00:12:12.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.650 "dma_device_type": 2 00:12:12.650 } 00:12:12.650 ], 00:12:12.650 "driver_specific": {} 00:12:12.650 }' 00:12:12.650 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:12.650 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:12.650 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:12.650 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:12.650 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:12.650 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:12.908 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:12.908 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:12.908 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:12.908 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:12.908 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:12.908 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:12.908 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:12.908 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:12.908 03:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:13.166 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:13.166 "name": "BaseBdev3", 00:12:13.166 "aliases": [ 00:12:13.166 "31fbaac2-d738-448b-b15f-03fd5143effc" 00:12:13.166 ], 00:12:13.166 "product_name": "Malloc disk", 00:12:13.166 "block_size": 512, 00:12:13.166 "num_blocks": 65536, 00:12:13.166 "uuid": "31fbaac2-d738-448b-b15f-03fd5143effc", 00:12:13.166 "assigned_rate_limits": { 00:12:13.166 "rw_ios_per_sec": 0, 00:12:13.166 "rw_mbytes_per_sec": 0, 00:12:13.166 "r_mbytes_per_sec": 0, 00:12:13.166 "w_mbytes_per_sec": 0 00:12:13.166 }, 00:12:13.166 "claimed": true, 00:12:13.166 "claim_type": "exclusive_write", 00:12:13.166 "zoned": false, 00:12:13.166 "supported_io_types": { 00:12:13.166 "read": true, 00:12:13.166 "write": true, 00:12:13.166 "unmap": true, 00:12:13.166 "write_zeroes": true, 00:12:13.166 "flush": true, 00:12:13.166 "reset": true, 00:12:13.166 "compare": false, 00:12:13.166 "compare_and_write": false, 00:12:13.166 "abort": true, 00:12:13.166 "nvme_admin": false, 00:12:13.166 "nvme_io": false 00:12:13.166 }, 00:12:13.166 "memory_domains": [ 00:12:13.166 { 00:12:13.166 "dma_device_id": "system", 00:12:13.166 "dma_device_type": 1 00:12:13.166 }, 00:12:13.166 { 00:12:13.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.166 "dma_device_type": 2 00:12:13.166 } 00:12:13.166 ], 00:12:13.166 "driver_specific": {} 00:12:13.166 }' 00:12:13.166 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:13.166 03:06:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:13.424 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:13.424 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:13.424 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:13.424 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:13.424 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:13.424 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:13.424 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:13.424 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:13.424 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:13.682 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:13.682 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:13.941 [2024-05-15 03:06:44.846397] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:13.941 [2024-05-15 03:06:44.846423] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.941 [2024-05-15 03:06:44.846467] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.941 03:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.199 03:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:14.199 "name": "Existed_Raid", 00:12:14.199 "uuid": "1188c708-5c00-4288-b4bc-f66203b458ff", 00:12:14.199 "strip_size_kb": 64, 00:12:14.199 "state": "offline", 00:12:14.199 "raid_level": "raid0", 00:12:14.199 "superblock": true, 00:12:14.199 "num_base_bdevs": 3, 00:12:14.199 "num_base_bdevs_discovered": 2, 00:12:14.199 "num_base_bdevs_operational": 2, 00:12:14.199 "base_bdevs_list": [ 00:12:14.199 { 00:12:14.199 "name": null, 00:12:14.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.199 "is_configured": false, 00:12:14.199 "data_offset": 2048, 00:12:14.199 "data_size": 63488 00:12:14.199 }, 00:12:14.199 { 00:12:14.199 "name": "BaseBdev2", 00:12:14.199 "uuid": "f5d8df51-e82f-408b-9767-4a04346385ac", 00:12:14.199 "is_configured": true, 00:12:14.199 "data_offset": 2048, 00:12:14.199 "data_size": 63488 00:12:14.199 }, 00:12:14.199 { 00:12:14.199 "name": "BaseBdev3", 00:12:14.199 "uuid": "31fbaac2-d738-448b-b15f-03fd5143effc", 00:12:14.199 "is_configured": true, 00:12:14.199 "data_offset": 2048, 00:12:14.199 "data_size": 63488 00:12:14.199 } 00:12:14.199 ] 00:12:14.199 }' 00:12:14.199 03:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:14.199 03:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.766 03:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:14.766 03:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:14.766 03:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.766 03:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:15.025 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:15.025 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:15.025 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:15.284 [2024-05-15 03:06:46.231431] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:15.284 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:15.284 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.284 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.284 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:15.543 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:15.543 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:15.543 03:06:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:15.802 [2024-05-15 03:06:46.755300] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:15.802 [2024-05-15 03:06:46.755345] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1429760 name Existed_Raid, state offline 00:12:15.802 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:15.802 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.802 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.802 03:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:12:16.060 03:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:12:16.060 03:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:12:16.060 03:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:12:16.060 03:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:12:16.060 03:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:12:16.060 03:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:16.319 BaseBdev2 00:12:16.319 03:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:12:16.319 03:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:16.319 03:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:16.319 03:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:16.319 03:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:16.319 03:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:16.319 03:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:16.577 03:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:16.835 [ 00:12:16.835 { 00:12:16.835 "name": "BaseBdev2", 00:12:16.835 "aliases": [ 00:12:16.835 "995eed71-a5a2-4b83-95a6-13d5c9f71f94" 00:12:16.835 ], 00:12:16.835 "product_name": "Malloc disk", 00:12:16.835 "block_size": 512, 00:12:16.835 "num_blocks": 65536, 00:12:16.835 "uuid": "995eed71-a5a2-4b83-95a6-13d5c9f71f94", 00:12:16.835 "assigned_rate_limits": { 00:12:16.835 "rw_ios_per_sec": 0, 00:12:16.835 "rw_mbytes_per_sec": 0, 00:12:16.835 "r_mbytes_per_sec": 0, 00:12:16.835 "w_mbytes_per_sec": 0 00:12:16.835 }, 00:12:16.835 "claimed": false, 00:12:16.835 "zoned": false, 00:12:16.835 "supported_io_types": { 00:12:16.835 "read": true, 00:12:16.835 "write": true, 
00:12:16.835 "unmap": true, 00:12:16.835 "write_zeroes": true, 00:12:16.835 "flush": true, 00:12:16.835 "reset": true, 00:12:16.835 "compare": false, 00:12:16.835 "compare_and_write": false, 00:12:16.835 "abort": true, 00:12:16.835 "nvme_admin": false, 00:12:16.835 "nvme_io": false 00:12:16.835 }, 00:12:16.835 "memory_domains": [ 00:12:16.835 { 00:12:16.835 "dma_device_id": "system", 00:12:16.835 "dma_device_type": 1 00:12:16.835 }, 00:12:16.835 { 00:12:16.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.835 "dma_device_type": 2 00:12:16.835 } 00:12:16.836 ], 00:12:16.836 "driver_specific": {} 00:12:16.836 } 00:12:16.836 ] 00:12:16.836 03:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:16.836 03:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:12:16.836 03:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:12:16.836 03:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:17.094 BaseBdev3 00:12:17.094 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:12:17.094 03:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:12:17.094 03:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:17.094 03:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:17.094 03:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:17.094 03:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:17.094 03:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:17.352 03:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:17.611 [ 00:12:17.611 { 00:12:17.611 "name": "BaseBdev3", 00:12:17.611 "aliases": [ 00:12:17.611 "b5b5dab2-5f96-4536-aeab-9b86640f3a09" 00:12:17.611 ], 00:12:17.611 "product_name": "Malloc disk", 00:12:17.611 "block_size": 512, 00:12:17.611 "num_blocks": 65536, 00:12:17.611 "uuid": "b5b5dab2-5f96-4536-aeab-9b86640f3a09", 00:12:17.611 "assigned_rate_limits": { 00:12:17.611 "rw_ios_per_sec": 0, 00:12:17.611 "rw_mbytes_per_sec": 0, 00:12:17.611 "r_mbytes_per_sec": 0, 00:12:17.611 "w_mbytes_per_sec": 0 00:12:17.611 }, 00:12:17.611 "claimed": false, 00:12:17.611 "zoned": false, 00:12:17.611 "supported_io_types": { 00:12:17.611 "read": true, 00:12:17.611 "write": true, 00:12:17.611 "unmap": true, 00:12:17.611 "write_zeroes": true, 00:12:17.611 "flush": true, 00:12:17.611 "reset": true, 00:12:17.611 "compare": false, 00:12:17.611 "compare_and_write": false, 00:12:17.611 "abort": true, 00:12:17.611 "nvme_admin": false, 00:12:17.611 "nvme_io": false 00:12:17.611 }, 00:12:17.611 "memory_domains": [ 00:12:17.611 { 00:12:17.611 "dma_device_id": "system", 00:12:17.611 "dma_device_type": 1 00:12:17.611 }, 00:12:17.611 { 00:12:17.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.611 "dma_device_type": 2 00:12:17.611 } 
00:12:17.611 ], 00:12:17.611 "driver_specific": {} 00:12:17.611 } 00:12:17.611 ] 00:12:17.611 03:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:17.611 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:12:17.611 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:12:17.611 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:17.611 [2024-05-15 03:06:48.765313] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.611 [2024-05-15 03:06:48.765352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.611 [2024-05-15 03:06:48.765371] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.611 [2024-05-15 03:06:48.766966] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.870 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:17.870 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:17.870 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:17.870 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:17.870 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:17.870 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:17.870 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:17.870 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:17.870 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:17.870 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:17.870 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.870 03:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.128 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:18.128 "name": "Existed_Raid", 00:12:18.128 "uuid": "3d93fcfc-acde-4866-ab37-0206e15c2189", 00:12:18.128 "strip_size_kb": 64, 00:12:18.128 "state": "configuring", 00:12:18.128 "raid_level": "raid0", 00:12:18.128 "superblock": true, 00:12:18.128 "num_base_bdevs": 3, 00:12:18.128 "num_base_bdevs_discovered": 2, 00:12:18.128 "num_base_bdevs_operational": 3, 00:12:18.128 "base_bdevs_list": [ 00:12:18.128 { 00:12:18.128 "name": "BaseBdev1", 00:12:18.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.128 "is_configured": false, 00:12:18.128 "data_offset": 0, 00:12:18.128 "data_size": 0 00:12:18.128 }, 00:12:18.128 { 00:12:18.128 "name": "BaseBdev2", 00:12:18.128 "uuid": "995eed71-a5a2-4b83-95a6-13d5c9f71f94", 
00:12:18.128 "is_configured": true, 00:12:18.128 "data_offset": 2048, 00:12:18.128 "data_size": 63488 00:12:18.128 }, 00:12:18.128 { 00:12:18.128 "name": "BaseBdev3", 00:12:18.128 "uuid": "b5b5dab2-5f96-4536-aeab-9b86640f3a09", 00:12:18.128 "is_configured": true, 00:12:18.128 "data_offset": 2048, 00:12:18.128 "data_size": 63488 00:12:18.128 } 00:12:18.128 ] 00:12:18.128 }' 00:12:18.128 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:18.128 03:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.695 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:18.954 [2024-05-15 03:06:49.892271] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.954 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:18.954 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:18.954 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:18.954 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:18.954 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:18.954 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:18.954 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:18.954 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:18.954 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:18.954 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:18.954 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.954 03:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.213 03:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:19.213 "name": "Existed_Raid", 00:12:19.213 "uuid": "3d93fcfc-acde-4866-ab37-0206e15c2189", 00:12:19.213 "strip_size_kb": 64, 00:12:19.213 "state": "configuring", 00:12:19.213 "raid_level": "raid0", 00:12:19.214 "superblock": true, 00:12:19.214 "num_base_bdevs": 3, 00:12:19.214 "num_base_bdevs_discovered": 1, 00:12:19.214 "num_base_bdevs_operational": 3, 00:12:19.214 "base_bdevs_list": [ 00:12:19.214 { 00:12:19.214 "name": "BaseBdev1", 00:12:19.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.214 "is_configured": false, 00:12:19.214 "data_offset": 0, 00:12:19.214 "data_size": 0 00:12:19.214 }, 00:12:19.214 { 00:12:19.214 "name": null, 00:12:19.214 "uuid": "995eed71-a5a2-4b83-95a6-13d5c9f71f94", 00:12:19.214 "is_configured": false, 00:12:19.214 "data_offset": 2048, 00:12:19.214 "data_size": 63488 00:12:19.214 }, 00:12:19.214 { 00:12:19.214 "name": "BaseBdev3", 00:12:19.214 "uuid": "b5b5dab2-5f96-4536-aeab-9b86640f3a09", 00:12:19.214 "is_configured": true, 00:12:19.214 
"data_offset": 2048, 00:12:19.214 "data_size": 63488 00:12:19.214 } 00:12:19.214 ] 00:12:19.214 }' 00:12:19.214 03:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:19.214 03:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.782 03:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:19.782 03:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:20.040 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:12:20.040 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:20.298 [2024-05-15 03:06:51.264473] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.298 BaseBdev1 00:12:20.298 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:12:20.298 03:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:20.298 03:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:20.298 03:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:20.298 03:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:20.298 03:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:20.299 03:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:20.557 03:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:20.816 [ 00:12:20.816 { 00:12:20.816 "name": "BaseBdev1", 00:12:20.816 "aliases": [ 00:12:20.816 "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb" 00:12:20.816 ], 00:12:20.816 "product_name": "Malloc disk", 00:12:20.816 "block_size": 512, 00:12:20.816 "num_blocks": 65536, 00:12:20.816 "uuid": "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb", 00:12:20.816 "assigned_rate_limits": { 00:12:20.816 "rw_ios_per_sec": 0, 00:12:20.816 "rw_mbytes_per_sec": 0, 00:12:20.816 "r_mbytes_per_sec": 0, 00:12:20.816 "w_mbytes_per_sec": 0 00:12:20.816 }, 00:12:20.816 "claimed": true, 00:12:20.816 "claim_type": "exclusive_write", 00:12:20.816 "zoned": false, 00:12:20.816 "supported_io_types": { 00:12:20.816 "read": true, 00:12:20.816 "write": true, 00:12:20.816 "unmap": true, 00:12:20.816 "write_zeroes": true, 00:12:20.816 "flush": true, 00:12:20.816 "reset": true, 00:12:20.816 "compare": false, 00:12:20.816 "compare_and_write": false, 00:12:20.816 "abort": true, 00:12:20.816 "nvme_admin": false, 00:12:20.816 "nvme_io": false 00:12:20.816 }, 00:12:20.816 "memory_domains": [ 00:12:20.816 { 00:12:20.816 "dma_device_id": "system", 00:12:20.816 "dma_device_type": 1 00:12:20.816 }, 00:12:20.816 { 00:12:20.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.816 "dma_device_type": 2 00:12:20.816 } 00:12:20.816 ], 
00:12:20.816 "driver_specific": {} 00:12:20.816 } 00:12:20.816 ] 00:12:20.816 03:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:20.816 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:20.816 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:20.816 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:20.816 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:20.816 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:20.816 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:20.816 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:20.816 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:20.816 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:20.817 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:20.817 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.817 03:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.075 03:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:21.076 "name": "Existed_Raid", 00:12:21.076 "uuid": "3d93fcfc-acde-4866-ab37-0206e15c2189", 00:12:21.076 "strip_size_kb": 64, 00:12:21.076 "state": "configuring", 00:12:21.076 "raid_level": "raid0", 00:12:21.076 "superblock": true, 00:12:21.076 "num_base_bdevs": 3, 00:12:21.076 "num_base_bdevs_discovered": 2, 00:12:21.076 "num_base_bdevs_operational": 3, 00:12:21.076 "base_bdevs_list": [ 00:12:21.076 { 00:12:21.076 "name": "BaseBdev1", 00:12:21.076 "uuid": "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb", 00:12:21.076 "is_configured": true, 00:12:21.076 "data_offset": 2048, 00:12:21.076 "data_size": 63488 00:12:21.076 }, 00:12:21.076 { 00:12:21.076 "name": null, 00:12:21.076 "uuid": "995eed71-a5a2-4b83-95a6-13d5c9f71f94", 00:12:21.076 "is_configured": false, 00:12:21.076 "data_offset": 2048, 00:12:21.076 "data_size": 63488 00:12:21.076 }, 00:12:21.076 { 00:12:21.076 "name": "BaseBdev3", 00:12:21.076 "uuid": "b5b5dab2-5f96-4536-aeab-9b86640f3a09", 00:12:21.076 "is_configured": true, 00:12:21.076 "data_offset": 2048, 00:12:21.076 "data_size": 63488 00:12:21.076 } 00:12:21.076 ] 00:12:21.076 }' 00:12:21.076 03:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:21.076 03:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.643 03:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.643 03:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:21.901 03:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ 
true == \t\r\u\e ]] 00:12:21.901 03:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:22.158 [2024-05-15 03:06:53.153596] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:22.159 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:22.159 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:22.159 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:22.159 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:22.159 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:22.159 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:22.159 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:22.159 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:22.159 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:22.159 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:22.159 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:22.159 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.418 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:22.418 "name": "Existed_Raid", 00:12:22.418 "uuid": "3d93fcfc-acde-4866-ab37-0206e15c2189", 00:12:22.418 "strip_size_kb": 64, 00:12:22.418 "state": "configuring", 00:12:22.418 "raid_level": "raid0", 00:12:22.418 "superblock": true, 00:12:22.418 "num_base_bdevs": 3, 00:12:22.418 "num_base_bdevs_discovered": 1, 00:12:22.418 "num_base_bdevs_operational": 3, 00:12:22.418 "base_bdevs_list": [ 00:12:22.418 { 00:12:22.418 "name": "BaseBdev1", 00:12:22.418 "uuid": "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb", 00:12:22.418 "is_configured": true, 00:12:22.418 "data_offset": 2048, 00:12:22.418 "data_size": 63488 00:12:22.418 }, 00:12:22.418 { 00:12:22.418 "name": null, 00:12:22.418 "uuid": "995eed71-a5a2-4b83-95a6-13d5c9f71f94", 00:12:22.418 "is_configured": false, 00:12:22.418 "data_offset": 2048, 00:12:22.418 "data_size": 63488 00:12:22.418 }, 00:12:22.418 { 00:12:22.418 "name": null, 00:12:22.418 "uuid": "b5b5dab2-5f96-4536-aeab-9b86640f3a09", 00:12:22.418 "is_configured": false, 00:12:22.418 "data_offset": 2048, 00:12:22.418 "data_size": 63488 00:12:22.418 } 00:12:22.418 ] 00:12:22.418 }' 00:12:22.418 03:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:22.418 03:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.986 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:22.986 03:06:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:23.246 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:12:23.246 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:23.504 [2024-05-15 03:06:54.529293] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.504 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:23.504 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:23.504 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:23.504 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:23.504 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:23.504 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:23.504 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:23.504 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:23.504 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:23.504 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:23.504 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.504 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.763 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:23.763 "name": "Existed_Raid", 00:12:23.763 "uuid": "3d93fcfc-acde-4866-ab37-0206e15c2189", 00:12:23.763 "strip_size_kb": 64, 00:12:23.763 "state": "configuring", 00:12:23.763 "raid_level": "raid0", 00:12:23.763 "superblock": true, 00:12:23.763 "num_base_bdevs": 3, 00:12:23.763 "num_base_bdevs_discovered": 2, 00:12:23.763 "num_base_bdevs_operational": 3, 00:12:23.763 "base_bdevs_list": [ 00:12:23.763 { 00:12:23.763 "name": "BaseBdev1", 00:12:23.763 "uuid": "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb", 00:12:23.763 "is_configured": true, 00:12:23.763 "data_offset": 2048, 00:12:23.763 "data_size": 63488 00:12:23.763 }, 00:12:23.763 { 00:12:23.763 "name": null, 00:12:23.763 "uuid": "995eed71-a5a2-4b83-95a6-13d5c9f71f94", 00:12:23.763 "is_configured": false, 00:12:23.763 "data_offset": 2048, 00:12:23.763 "data_size": 63488 00:12:23.763 }, 00:12:23.763 { 00:12:23.763 "name": "BaseBdev3", 00:12:23.763 "uuid": "b5b5dab2-5f96-4536-aeab-9b86640f3a09", 00:12:23.763 "is_configured": true, 00:12:23.763 "data_offset": 2048, 00:12:23.763 "data_size": 63488 00:12:23.763 } 00:12:23.763 ] 00:12:23.763 }' 00:12:23.763 03:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:23.763 03:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.328 03:06:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@324 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.328 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:24.585 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:12:24.585 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:24.842 [2024-05-15 03:06:55.921043] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:24.842 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:24.842 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:24.842 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:24.842 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:24.842 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:24.842 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:24.842 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:24.842 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:24.842 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:24.842 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:24.842 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.842 03:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.100 03:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:25.100 "name": "Existed_Raid", 00:12:25.100 "uuid": "3d93fcfc-acde-4866-ab37-0206e15c2189", 00:12:25.100 "strip_size_kb": 64, 00:12:25.100 "state": "configuring", 00:12:25.100 "raid_level": "raid0", 00:12:25.100 "superblock": true, 00:12:25.100 "num_base_bdevs": 3, 00:12:25.100 "num_base_bdevs_discovered": 1, 00:12:25.100 "num_base_bdevs_operational": 3, 00:12:25.100 "base_bdevs_list": [ 00:12:25.100 { 00:12:25.100 "name": null, 00:12:25.100 "uuid": "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb", 00:12:25.100 "is_configured": false, 00:12:25.100 "data_offset": 2048, 00:12:25.100 "data_size": 63488 00:12:25.100 }, 00:12:25.100 { 00:12:25.100 "name": null, 00:12:25.100 "uuid": "995eed71-a5a2-4b83-95a6-13d5c9f71f94", 00:12:25.100 "is_configured": false, 00:12:25.100 "data_offset": 2048, 00:12:25.100 "data_size": 63488 00:12:25.100 }, 00:12:25.100 { 00:12:25.100 "name": "BaseBdev3", 00:12:25.100 "uuid": "b5b5dab2-5f96-4536-aeab-9b86640f3a09", 00:12:25.100 "is_configured": true, 00:12:25.100 "data_offset": 2048, 00:12:25.100 "data_size": 63488 00:12:25.100 } 00:12:25.100 ] 00:12:25.100 }' 00:12:25.100 03:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
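The removal/re-add rounds above all go through the same verification idiom: bdev_raid.sh@127 dumps every raid bdev with "bdev_raid_get_bdevs all", isolates Existed_Raid with a jq select, and the @117-@129 helper compares fields such as "state", "num_base_bdevs_discovered" and "num_base_bdevs_operational" against the expected values. A minimal standalone sketch of that state assertion, assuming an SPDK target listening on /var/tmp/spdk-raid.sock; the helper name assert_raid_state is illustrative and not part of the test suite:

#!/usr/bin/env bash
# Sketch only: mirrors the state check that verify_raid_bdev_state performs
# in the log above, using the same rpc.py script and bdev_raid_get_bdevs RPC.
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
assert_raid_state() {
    local name=$1 expected=$2 state
    state=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\").state")
    [[ $state == "$expected" ]] || { echo "$name: state $state, expected $expected" >&2; return 1; }
}
assert_raid_state Existed_Raid configuring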
00:12:25.100 03:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.034 03:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.034 03:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:26.034 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:12:26.034 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:26.292 [2024-05-15 03:06:57.343467] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.292 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:26.292 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:26.292 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:26.292 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:26.292 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:26.292 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:26.292 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:26.292 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:26.292 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:26.292 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:26.292 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.292 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.550 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:26.550 "name": "Existed_Raid", 00:12:26.550 "uuid": "3d93fcfc-acde-4866-ab37-0206e15c2189", 00:12:26.550 "strip_size_kb": 64, 00:12:26.550 "state": "configuring", 00:12:26.550 "raid_level": "raid0", 00:12:26.550 "superblock": true, 00:12:26.550 "num_base_bdevs": 3, 00:12:26.550 "num_base_bdevs_discovered": 2, 00:12:26.550 "num_base_bdevs_operational": 3, 00:12:26.550 "base_bdevs_list": [ 00:12:26.550 { 00:12:26.550 "name": null, 00:12:26.550 "uuid": "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb", 00:12:26.550 "is_configured": false, 00:12:26.550 "data_offset": 2048, 00:12:26.550 "data_size": 63488 00:12:26.550 }, 00:12:26.550 { 00:12:26.550 "name": "BaseBdev2", 00:12:26.550 "uuid": "995eed71-a5a2-4b83-95a6-13d5c9f71f94", 00:12:26.550 "is_configured": true, 00:12:26.550 "data_offset": 2048, 00:12:26.550 "data_size": 63488 00:12:26.550 }, 00:12:26.550 { 00:12:26.550 "name": "BaseBdev3", 00:12:26.550 "uuid": "b5b5dab2-5f96-4536-aeab-9b86640f3a09", 00:12:26.550 "is_configured": true, 00:12:26.550 
"data_offset": 2048, 00:12:26.550 "data_size": 63488 00:12:26.550 } 00:12:26.550 ] 00:12:26.550 }' 00:12:26.550 03:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:26.550 03:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.117 03:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.117 03:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:27.375 03:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:12:27.375 03:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.375 03:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:27.633 03:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1fb16ab4-0bc7-4703-b45f-4c5b8b942afb 00:12:27.892 [2024-05-15 03:06:58.892460] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:27.892 [2024-05-15 03:06:58.892612] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1427c20 00:12:27.892 [2024-05-15 03:06:58.892625] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:27.892 [2024-05-15 03:06:58.892817] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15ce7a0 00:12:27.892 [2024-05-15 03:06:58.892955] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1427c20 00:12:27.892 [2024-05-15 03:06:58.892964] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1427c20 00:12:27.892 [2024-05-15 03:06:58.893059] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.892 NewBaseBdev 00:12:27.892 03:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:12:27.892 03:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:12:27.892 03:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:27.892 03:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:27.892 03:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:27.892 03:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:27.892 03:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:28.152 03:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:28.416 [ 00:12:28.416 { 00:12:28.416 "name": "NewBaseBdev", 00:12:28.416 "aliases": [ 00:12:28.416 "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb" 00:12:28.416 ], 
00:12:28.416 "product_name": "Malloc disk", 00:12:28.416 "block_size": 512, 00:12:28.416 "num_blocks": 65536, 00:12:28.416 "uuid": "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb", 00:12:28.416 "assigned_rate_limits": { 00:12:28.416 "rw_ios_per_sec": 0, 00:12:28.416 "rw_mbytes_per_sec": 0, 00:12:28.416 "r_mbytes_per_sec": 0, 00:12:28.416 "w_mbytes_per_sec": 0 00:12:28.416 }, 00:12:28.416 "claimed": true, 00:12:28.416 "claim_type": "exclusive_write", 00:12:28.416 "zoned": false, 00:12:28.416 "supported_io_types": { 00:12:28.416 "read": true, 00:12:28.416 "write": true, 00:12:28.416 "unmap": true, 00:12:28.416 "write_zeroes": true, 00:12:28.416 "flush": true, 00:12:28.416 "reset": true, 00:12:28.416 "compare": false, 00:12:28.416 "compare_and_write": false, 00:12:28.416 "abort": true, 00:12:28.416 "nvme_admin": false, 00:12:28.416 "nvme_io": false 00:12:28.416 }, 00:12:28.416 "memory_domains": [ 00:12:28.416 { 00:12:28.416 "dma_device_id": "system", 00:12:28.416 "dma_device_type": 1 00:12:28.416 }, 00:12:28.416 { 00:12:28.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.416 "dma_device_type": 2 00:12:28.416 } 00:12:28.416 ], 00:12:28.416 "driver_specific": {} 00:12:28.416 } 00:12:28.416 ] 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.416 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.674 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:28.674 "name": "Existed_Raid", 00:12:28.674 "uuid": "3d93fcfc-acde-4866-ab37-0206e15c2189", 00:12:28.674 "strip_size_kb": 64, 00:12:28.674 "state": "online", 00:12:28.674 "raid_level": "raid0", 00:12:28.674 "superblock": true, 00:12:28.674 "num_base_bdevs": 3, 00:12:28.674 "num_base_bdevs_discovered": 3, 00:12:28.674 "num_base_bdevs_operational": 3, 00:12:28.674 "base_bdevs_list": [ 00:12:28.674 { 00:12:28.674 "name": "NewBaseBdev", 00:12:28.674 "uuid": "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb", 00:12:28.674 "is_configured": true, 00:12:28.674 "data_offset": 2048, 00:12:28.674 "data_size": 63488 00:12:28.674 
}, 00:12:28.674 { 00:12:28.674 "name": "BaseBdev2", 00:12:28.674 "uuid": "995eed71-a5a2-4b83-95a6-13d5c9f71f94", 00:12:28.674 "is_configured": true, 00:12:28.674 "data_offset": 2048, 00:12:28.674 "data_size": 63488 00:12:28.674 }, 00:12:28.674 { 00:12:28.674 "name": "BaseBdev3", 00:12:28.674 "uuid": "b5b5dab2-5f96-4536-aeab-9b86640f3a09", 00:12:28.674 "is_configured": true, 00:12:28.674 "data_offset": 2048, 00:12:28.674 "data_size": 63488 00:12:28.674 } 00:12:28.674 ] 00:12:28.674 }' 00:12:28.674 03:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:28.674 03:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.238 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:12:29.238 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:12:29.238 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:29.238 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:29.238 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:29.238 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:12:29.238 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:29.238 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:29.495 [2024-05-15 03:07:00.513101] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.495 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:29.495 "name": "Existed_Raid", 00:12:29.495 "aliases": [ 00:12:29.495 "3d93fcfc-acde-4866-ab37-0206e15c2189" 00:12:29.495 ], 00:12:29.495 "product_name": "Raid Volume", 00:12:29.495 "block_size": 512, 00:12:29.495 "num_blocks": 190464, 00:12:29.495 "uuid": "3d93fcfc-acde-4866-ab37-0206e15c2189", 00:12:29.495 "assigned_rate_limits": { 00:12:29.495 "rw_ios_per_sec": 0, 00:12:29.495 "rw_mbytes_per_sec": 0, 00:12:29.495 "r_mbytes_per_sec": 0, 00:12:29.495 "w_mbytes_per_sec": 0 00:12:29.495 }, 00:12:29.495 "claimed": false, 00:12:29.495 "zoned": false, 00:12:29.495 "supported_io_types": { 00:12:29.495 "read": true, 00:12:29.495 "write": true, 00:12:29.495 "unmap": true, 00:12:29.495 "write_zeroes": true, 00:12:29.495 "flush": true, 00:12:29.495 "reset": true, 00:12:29.495 "compare": false, 00:12:29.495 "compare_and_write": false, 00:12:29.495 "abort": false, 00:12:29.495 "nvme_admin": false, 00:12:29.495 "nvme_io": false 00:12:29.495 }, 00:12:29.495 "memory_domains": [ 00:12:29.495 { 00:12:29.495 "dma_device_id": "system", 00:12:29.495 "dma_device_type": 1 00:12:29.495 }, 00:12:29.495 { 00:12:29.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.495 "dma_device_type": 2 00:12:29.495 }, 00:12:29.495 { 00:12:29.495 "dma_device_id": "system", 00:12:29.495 "dma_device_type": 1 00:12:29.495 }, 00:12:29.495 { 00:12:29.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.496 "dma_device_type": 2 00:12:29.496 }, 00:12:29.496 { 00:12:29.496 "dma_device_id": "system", 00:12:29.496 "dma_device_type": 1 00:12:29.496 }, 00:12:29.496 { 00:12:29.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.496 
"dma_device_type": 2 00:12:29.496 } 00:12:29.496 ], 00:12:29.496 "driver_specific": { 00:12:29.496 "raid": { 00:12:29.496 "uuid": "3d93fcfc-acde-4866-ab37-0206e15c2189", 00:12:29.496 "strip_size_kb": 64, 00:12:29.496 "state": "online", 00:12:29.496 "raid_level": "raid0", 00:12:29.496 "superblock": true, 00:12:29.496 "num_base_bdevs": 3, 00:12:29.496 "num_base_bdevs_discovered": 3, 00:12:29.496 "num_base_bdevs_operational": 3, 00:12:29.496 "base_bdevs_list": [ 00:12:29.496 { 00:12:29.496 "name": "NewBaseBdev", 00:12:29.496 "uuid": "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb", 00:12:29.496 "is_configured": true, 00:12:29.496 "data_offset": 2048, 00:12:29.496 "data_size": 63488 00:12:29.496 }, 00:12:29.496 { 00:12:29.496 "name": "BaseBdev2", 00:12:29.496 "uuid": "995eed71-a5a2-4b83-95a6-13d5c9f71f94", 00:12:29.496 "is_configured": true, 00:12:29.496 "data_offset": 2048, 00:12:29.496 "data_size": 63488 00:12:29.496 }, 00:12:29.496 { 00:12:29.496 "name": "BaseBdev3", 00:12:29.496 "uuid": "b5b5dab2-5f96-4536-aeab-9b86640f3a09", 00:12:29.496 "is_configured": true, 00:12:29.496 "data_offset": 2048, 00:12:29.496 "data_size": 63488 00:12:29.496 } 00:12:29.496 ] 00:12:29.496 } 00:12:29.496 } 00:12:29.496 }' 00:12:29.496 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:29.496 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:12:29.496 BaseBdev2 00:12:29.496 BaseBdev3' 00:12:29.496 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:29.496 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:29.496 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:29.753 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:29.753 "name": "NewBaseBdev", 00:12:29.753 "aliases": [ 00:12:29.753 "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb" 00:12:29.753 ], 00:12:29.753 "product_name": "Malloc disk", 00:12:29.753 "block_size": 512, 00:12:29.753 "num_blocks": 65536, 00:12:29.753 "uuid": "1fb16ab4-0bc7-4703-b45f-4c5b8b942afb", 00:12:29.753 "assigned_rate_limits": { 00:12:29.753 "rw_ios_per_sec": 0, 00:12:29.753 "rw_mbytes_per_sec": 0, 00:12:29.753 "r_mbytes_per_sec": 0, 00:12:29.753 "w_mbytes_per_sec": 0 00:12:29.753 }, 00:12:29.753 "claimed": true, 00:12:29.753 "claim_type": "exclusive_write", 00:12:29.753 "zoned": false, 00:12:29.753 "supported_io_types": { 00:12:29.753 "read": true, 00:12:29.753 "write": true, 00:12:29.753 "unmap": true, 00:12:29.753 "write_zeroes": true, 00:12:29.753 "flush": true, 00:12:29.753 "reset": true, 00:12:29.753 "compare": false, 00:12:29.753 "compare_and_write": false, 00:12:29.753 "abort": true, 00:12:29.753 "nvme_admin": false, 00:12:29.753 "nvme_io": false 00:12:29.753 }, 00:12:29.753 "memory_domains": [ 00:12:29.753 { 00:12:29.753 "dma_device_id": "system", 00:12:29.753 "dma_device_type": 1 00:12:29.753 }, 00:12:29.753 { 00:12:29.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.753 "dma_device_type": 2 00:12:29.753 } 00:12:29.753 ], 00:12:29.753 "driver_specific": {} 00:12:29.753 }' 00:12:29.753 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:29.753 03:07:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:30.010 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:30.010 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:30.010 03:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:30.010 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:30.010 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:30.010 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:30.010 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:30.010 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:30.296 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:30.296 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:30.296 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:30.296 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:30.296 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:30.576 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:30.576 "name": "BaseBdev2", 00:12:30.576 "aliases": [ 00:12:30.576 "995eed71-a5a2-4b83-95a6-13d5c9f71f94" 00:12:30.576 ], 00:12:30.576 "product_name": "Malloc disk", 00:12:30.576 "block_size": 512, 00:12:30.576 "num_blocks": 65536, 00:12:30.576 "uuid": "995eed71-a5a2-4b83-95a6-13d5c9f71f94", 00:12:30.576 "assigned_rate_limits": { 00:12:30.576 "rw_ios_per_sec": 0, 00:12:30.576 "rw_mbytes_per_sec": 0, 00:12:30.576 "r_mbytes_per_sec": 0, 00:12:30.576 "w_mbytes_per_sec": 0 00:12:30.576 }, 00:12:30.576 "claimed": true, 00:12:30.576 "claim_type": "exclusive_write", 00:12:30.576 "zoned": false, 00:12:30.576 "supported_io_types": { 00:12:30.576 "read": true, 00:12:30.576 "write": true, 00:12:30.576 "unmap": true, 00:12:30.576 "write_zeroes": true, 00:12:30.576 "flush": true, 00:12:30.576 "reset": true, 00:12:30.576 "compare": false, 00:12:30.576 "compare_and_write": false, 00:12:30.576 "abort": true, 00:12:30.576 "nvme_admin": false, 00:12:30.576 "nvme_io": false 00:12:30.576 }, 00:12:30.576 "memory_domains": [ 00:12:30.576 { 00:12:30.576 "dma_device_id": "system", 00:12:30.576 "dma_device_type": 1 00:12:30.576 }, 00:12:30.576 { 00:12:30.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.576 "dma_device_type": 2 00:12:30.576 } 00:12:30.576 ], 00:12:30.576 "driver_specific": {} 00:12:30.576 }' 00:12:30.576 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:30.576 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:30.576 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:30.576 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:30.576 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:30.576 03:07:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:30.576 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:30.576 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:30.835 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:30.835 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:30.835 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:30.835 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:30.835 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:30.835 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:30.835 03:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:31.093 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:31.093 "name": "BaseBdev3", 00:12:31.093 "aliases": [ 00:12:31.093 "b5b5dab2-5f96-4536-aeab-9b86640f3a09" 00:12:31.093 ], 00:12:31.093 "product_name": "Malloc disk", 00:12:31.093 "block_size": 512, 00:12:31.093 "num_blocks": 65536, 00:12:31.093 "uuid": "b5b5dab2-5f96-4536-aeab-9b86640f3a09", 00:12:31.093 "assigned_rate_limits": { 00:12:31.093 "rw_ios_per_sec": 0, 00:12:31.093 "rw_mbytes_per_sec": 0, 00:12:31.093 "r_mbytes_per_sec": 0, 00:12:31.093 "w_mbytes_per_sec": 0 00:12:31.093 }, 00:12:31.093 "claimed": true, 00:12:31.093 "claim_type": "exclusive_write", 00:12:31.093 "zoned": false, 00:12:31.093 "supported_io_types": { 00:12:31.093 "read": true, 00:12:31.093 "write": true, 00:12:31.093 "unmap": true, 00:12:31.093 "write_zeroes": true, 00:12:31.093 "flush": true, 00:12:31.093 "reset": true, 00:12:31.093 "compare": false, 00:12:31.093 "compare_and_write": false, 00:12:31.093 "abort": true, 00:12:31.093 "nvme_admin": false, 00:12:31.093 "nvme_io": false 00:12:31.093 }, 00:12:31.093 "memory_domains": [ 00:12:31.093 { 00:12:31.093 "dma_device_id": "system", 00:12:31.093 "dma_device_type": 1 00:12:31.093 }, 00:12:31.093 { 00:12:31.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.093 "dma_device_type": 2 00:12:31.093 } 00:12:31.093 ], 00:12:31.093 "driver_specific": {} 00:12:31.093 }' 00:12:31.093 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:31.093 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:31.093 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:31.093 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:31.350 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:31.350 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:31.350 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:31.350 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:31.350 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:31.350 03:07:02 
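# The $base_bdev_names list driving this per-bdev loop came from the raid dump at
# bdev_raid.sh@202. Equivalent one-liner (rpc() as in the sketch above):
rpc bdev_get_bdevs -b Existed_Raid \
  | jq -r '.[] | .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
# prints NewBaseBdev, BaseBdev2 and BaseBdev3, each then re-read with bdev_get_bdevs -b <name>.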
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:31.350 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:31.350 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:31.350 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:31.608 [2024-05-15 03:07:02.698710] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.608 [2024-05-15 03:07:02.698735] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.608 [2024-05-15 03:07:02.698789] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.608 [2024-05-15 03:07:02.698842] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.608 [2024-05-15 03:07:02.698859] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1427c20 name Existed_Raid, state offline 00:12:31.608 03:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 4065544 00:12:31.608 03:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 4065544 ']' 00:12:31.608 03:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 4065544 00:12:31.608 03:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:12:31.608 03:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:31.608 03:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4065544 00:12:31.608 03:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:31.608 03:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:31.608 03:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4065544' 00:12:31.608 killing process with pid 4065544 00:12:31.608 03:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 4065544 00:12:31.608 [2024-05-15 03:07:02.761389] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.608 03:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 4065544 00:12:31.866 [2024-05-15 03:07:02.806316] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.126 03:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:12:32.126 00:12:32.126 real 0m29.722s 00:12:32.126 user 0m55.566s 00:12:32.126 sys 0m4.118s 00:12:32.126 03:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:32.126 03:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.126 ************************************ 00:12:32.126 END TEST raid_state_function_test_sb 00:12:32.126 ************************************ 00:12:32.126 03:07:03 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:12:32.126 03:07:03 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:32.126 03:07:03 bdev_raid -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:12:32.126 03:07:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.126 ************************************ 00:12:32.126 START TEST raid_superblock_test 00:12:32.126 ************************************ 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 3 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=4070913 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 4070913 /var/tmp/spdk-raid.sock 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 4070913 ']' 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:32.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:32.126 03:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.385 [2024-05-15 03:07:03.298925] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
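# waitforlisten above polls the freshly launched app's RPC socket until it answers; the
# test then builds three identical malloc+passthru stacks and assembles them into a
# raid0 with an on-disk superblock (-s). A minimal hand-run sketch of that flow -- the
# rpc() helper and the rpc_get_methods ping are conveniences, not part of the test:
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
rpc() { $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
$SPDK/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
until rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
for i in 1 2 3; do
  rpc bdev_malloc_create 32 512 -b malloc$i                 # 32 MiB disk, 512 B blocks
  rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done
rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s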
00:12:32.385 [2024-05-15 03:07:03.298978] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4070913 ] 00:12:32.385 [2024-05-15 03:07:03.396454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.385 [2024-05-15 03:07:03.489398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.643 [2024-05-15 03:07:03.554418] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.643 [2024-05-15 03:07:03.554441] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.209 03:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:33.209 03:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:12:33.209 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:33.209 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:33.209 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:33.209 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:33.209 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:33.209 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:33.209 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:33.209 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:33.209 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:33.468 malloc1 00:12:33.468 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:33.725 [2024-05-15 03:07:04.740068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:33.725 [2024-05-15 03:07:04.740115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.726 [2024-05-15 03:07:04.740136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d09a00 00:12:33.726 [2024-05-15 03:07:04.740145] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.726 [2024-05-15 03:07:04.741971] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.726 [2024-05-15 03:07:04.741998] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:33.726 pt1 00:12:33.726 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:33.726 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:33.726 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:33.726 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:33.726 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:33.726 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:33.726 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:33.726 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:33.726 03:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:33.983 malloc2 00:12:33.983 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:34.241 [2024-05-15 03:07:05.254152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:34.241 [2024-05-15 03:07:05.254196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.241 [2024-05-15 03:07:05.254216] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d0a5f0 00:12:34.241 [2024-05-15 03:07:05.254226] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.241 [2024-05-15 03:07:05.255837] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.241 [2024-05-15 03:07:05.255870] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:34.241 pt2 00:12:34.241 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:34.241 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:34.241 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:34.241 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:34.241 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:34.241 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:34.241 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:34.241 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:34.241 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:34.499 malloc3 00:12:34.499 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:34.756 [2024-05-15 03:07:05.760036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:34.756 [2024-05-15 03:07:05.760077] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.756 [2024-05-15 03:07:05.760094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eaf900 00:12:34.756 [2024-05-15 03:07:05.760104] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.756 [2024-05-15 03:07:05.761585] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.756 [2024-05-15 03:07:05.761611] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:34.756 pt3 00:12:34.756 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:34.756 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:34.756 03:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:35.014 [2024-05-15 03:07:06.016733] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:35.014 [2024-05-15 03:07:06.017988] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:35.014 [2024-05-15 03:07:06.018042] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:35.014 [2024-05-15 03:07:06.018198] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1eb20f0 00:12:35.014 [2024-05-15 03:07:06.018208] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:35.014 [2024-05-15 03:07:06.018400] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d0af30 00:12:35.014 [2024-05-15 03:07:06.018546] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1eb20f0 00:12:35.014 [2024-05-15 03:07:06.018554] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1eb20f0 00:12:35.014 [2024-05-15 03:07:06.018649] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.014 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:35.014 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:35.014 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:35.014 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:35.014 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:35.014 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:35.014 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:35.014 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:35.014 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:35.014 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:35.014 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.014 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.272 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:35.272 "name": "raid_bdev1", 00:12:35.272 "uuid": "2e1bad3e-42a9-462c-bdda-81a528b0644f", 00:12:35.272 "strip_size_kb": 64, 00:12:35.272 "state": "online", 00:12:35.272 "raid_level": "raid0", 00:12:35.272 "superblock": true, 00:12:35.272 "num_base_bdevs": 3, 
00:12:35.272 "num_base_bdevs_discovered": 3, 00:12:35.272 "num_base_bdevs_operational": 3, 00:12:35.272 "base_bdevs_list": [ 00:12:35.272 { 00:12:35.272 "name": "pt1", 00:12:35.272 "uuid": "f70c9bee-f6c1-5706-94b9-81384cfc7264", 00:12:35.272 "is_configured": true, 00:12:35.272 "data_offset": 2048, 00:12:35.272 "data_size": 63488 00:12:35.272 }, 00:12:35.272 { 00:12:35.272 "name": "pt2", 00:12:35.272 "uuid": "fcb171b3-b7d3-5372-85a8-2eeeadff9921", 00:12:35.272 "is_configured": true, 00:12:35.272 "data_offset": 2048, 00:12:35.272 "data_size": 63488 00:12:35.272 }, 00:12:35.272 { 00:12:35.272 "name": "pt3", 00:12:35.272 "uuid": "0a747402-d572-5c22-811d-7b7486bf708b", 00:12:35.272 "is_configured": true, 00:12:35.272 "data_offset": 2048, 00:12:35.272 "data_size": 63488 00:12:35.272 } 00:12:35.272 ] 00:12:35.272 }' 00:12:35.272 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:35.272 03:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.837 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:35.837 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:12:35.837 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:35.837 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:35.837 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:35.837 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:12:35.837 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:35.837 03:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:36.095 [2024-05-15 03:07:07.148007] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.095 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:36.095 "name": "raid_bdev1", 00:12:36.095 "aliases": [ 00:12:36.095 "2e1bad3e-42a9-462c-bdda-81a528b0644f" 00:12:36.095 ], 00:12:36.095 "product_name": "Raid Volume", 00:12:36.095 "block_size": 512, 00:12:36.095 "num_blocks": 190464, 00:12:36.095 "uuid": "2e1bad3e-42a9-462c-bdda-81a528b0644f", 00:12:36.095 "assigned_rate_limits": { 00:12:36.095 "rw_ios_per_sec": 0, 00:12:36.095 "rw_mbytes_per_sec": 0, 00:12:36.095 "r_mbytes_per_sec": 0, 00:12:36.095 "w_mbytes_per_sec": 0 00:12:36.095 }, 00:12:36.095 "claimed": false, 00:12:36.095 "zoned": false, 00:12:36.095 "supported_io_types": { 00:12:36.095 "read": true, 00:12:36.095 "write": true, 00:12:36.095 "unmap": true, 00:12:36.095 "write_zeroes": true, 00:12:36.095 "flush": true, 00:12:36.095 "reset": true, 00:12:36.095 "compare": false, 00:12:36.095 "compare_and_write": false, 00:12:36.095 "abort": false, 00:12:36.095 "nvme_admin": false, 00:12:36.095 "nvme_io": false 00:12:36.095 }, 00:12:36.095 "memory_domains": [ 00:12:36.095 { 00:12:36.095 "dma_device_id": "system", 00:12:36.095 "dma_device_type": 1 00:12:36.095 }, 00:12:36.095 { 00:12:36.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.095 "dma_device_type": 2 00:12:36.095 }, 00:12:36.095 { 00:12:36.095 "dma_device_id": "system", 00:12:36.095 "dma_device_type": 1 00:12:36.095 }, 00:12:36.095 { 00:12:36.095 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:36.095 "dma_device_type": 2 00:12:36.095 }, 00:12:36.095 { 00:12:36.095 "dma_device_id": "system", 00:12:36.095 "dma_device_type": 1 00:12:36.095 }, 00:12:36.095 { 00:12:36.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.095 "dma_device_type": 2 00:12:36.095 } 00:12:36.095 ], 00:12:36.095 "driver_specific": { 00:12:36.095 "raid": { 00:12:36.095 "uuid": "2e1bad3e-42a9-462c-bdda-81a528b0644f", 00:12:36.095 "strip_size_kb": 64, 00:12:36.095 "state": "online", 00:12:36.095 "raid_level": "raid0", 00:12:36.095 "superblock": true, 00:12:36.095 "num_base_bdevs": 3, 00:12:36.095 "num_base_bdevs_discovered": 3, 00:12:36.095 "num_base_bdevs_operational": 3, 00:12:36.095 "base_bdevs_list": [ 00:12:36.095 { 00:12:36.095 "name": "pt1", 00:12:36.095 "uuid": "f70c9bee-f6c1-5706-94b9-81384cfc7264", 00:12:36.095 "is_configured": true, 00:12:36.095 "data_offset": 2048, 00:12:36.095 "data_size": 63488 00:12:36.095 }, 00:12:36.095 { 00:12:36.095 "name": "pt2", 00:12:36.095 "uuid": "fcb171b3-b7d3-5372-85a8-2eeeadff9921", 00:12:36.095 "is_configured": true, 00:12:36.095 "data_offset": 2048, 00:12:36.095 "data_size": 63488 00:12:36.095 }, 00:12:36.095 { 00:12:36.095 "name": "pt3", 00:12:36.095 "uuid": "0a747402-d572-5c22-811d-7b7486bf708b", 00:12:36.095 "is_configured": true, 00:12:36.095 "data_offset": 2048, 00:12:36.095 "data_size": 63488 00:12:36.095 } 00:12:36.095 ] 00:12:36.095 } 00:12:36.095 } 00:12:36.095 }' 00:12:36.095 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:36.095 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:12:36.095 pt2 00:12:36.095 pt3' 00:12:36.095 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:36.095 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:36.096 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:36.360 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:36.360 "name": "pt1", 00:12:36.360 "aliases": [ 00:12:36.360 "f70c9bee-f6c1-5706-94b9-81384cfc7264" 00:12:36.360 ], 00:12:36.360 "product_name": "passthru", 00:12:36.360 "block_size": 512, 00:12:36.360 "num_blocks": 65536, 00:12:36.360 "uuid": "f70c9bee-f6c1-5706-94b9-81384cfc7264", 00:12:36.360 "assigned_rate_limits": { 00:12:36.360 "rw_ios_per_sec": 0, 00:12:36.360 "rw_mbytes_per_sec": 0, 00:12:36.360 "r_mbytes_per_sec": 0, 00:12:36.360 "w_mbytes_per_sec": 0 00:12:36.360 }, 00:12:36.360 "claimed": true, 00:12:36.360 "claim_type": "exclusive_write", 00:12:36.360 "zoned": false, 00:12:36.360 "supported_io_types": { 00:12:36.360 "read": true, 00:12:36.360 "write": true, 00:12:36.360 "unmap": true, 00:12:36.360 "write_zeroes": true, 00:12:36.360 "flush": true, 00:12:36.360 "reset": true, 00:12:36.360 "compare": false, 00:12:36.360 "compare_and_write": false, 00:12:36.360 "abort": true, 00:12:36.360 "nvme_admin": false, 00:12:36.360 "nvme_io": false 00:12:36.360 }, 00:12:36.360 "memory_domains": [ 00:12:36.360 { 00:12:36.360 "dma_device_id": "system", 00:12:36.360 "dma_device_type": 1 00:12:36.360 }, 00:12:36.360 { 00:12:36.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.360 "dma_device_type": 2 00:12:36.360 } 00:12:36.360 ], 00:12:36.360 "driver_specific": { 
00:12:36.360 "passthru": { 00:12:36.360 "name": "pt1", 00:12:36.360 "base_bdev_name": "malloc1" 00:12:36.360 } 00:12:36.360 } 00:12:36.360 }' 00:12:36.360 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:36.618 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:36.618 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:36.618 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:36.618 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:36.618 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:36.618 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:36.618 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:36.618 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:36.618 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:36.876 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:36.876 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:36.876 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:36.876 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:36.876 03:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:37.135 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:37.135 "name": "pt2", 00:12:37.135 "aliases": [ 00:12:37.135 "fcb171b3-b7d3-5372-85a8-2eeeadff9921" 00:12:37.135 ], 00:12:37.135 "product_name": "passthru", 00:12:37.135 "block_size": 512, 00:12:37.135 "num_blocks": 65536, 00:12:37.135 "uuid": "fcb171b3-b7d3-5372-85a8-2eeeadff9921", 00:12:37.135 "assigned_rate_limits": { 00:12:37.135 "rw_ios_per_sec": 0, 00:12:37.135 "rw_mbytes_per_sec": 0, 00:12:37.135 "r_mbytes_per_sec": 0, 00:12:37.135 "w_mbytes_per_sec": 0 00:12:37.135 }, 00:12:37.135 "claimed": true, 00:12:37.135 "claim_type": "exclusive_write", 00:12:37.135 "zoned": false, 00:12:37.135 "supported_io_types": { 00:12:37.135 "read": true, 00:12:37.135 "write": true, 00:12:37.135 "unmap": true, 00:12:37.135 "write_zeroes": true, 00:12:37.135 "flush": true, 00:12:37.135 "reset": true, 00:12:37.135 "compare": false, 00:12:37.135 "compare_and_write": false, 00:12:37.135 "abort": true, 00:12:37.135 "nvme_admin": false, 00:12:37.135 "nvme_io": false 00:12:37.135 }, 00:12:37.135 "memory_domains": [ 00:12:37.135 { 00:12:37.135 "dma_device_id": "system", 00:12:37.135 "dma_device_type": 1 00:12:37.135 }, 00:12:37.135 { 00:12:37.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.135 "dma_device_type": 2 00:12:37.135 } 00:12:37.135 ], 00:12:37.135 "driver_specific": { 00:12:37.135 "passthru": { 00:12:37.135 "name": "pt2", 00:12:37.135 "base_bdev_name": "malloc2" 00:12:37.135 } 00:12:37.135 } 00:12:37.135 }' 00:12:37.135 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:37.135 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:37.135 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 
]] 00:12:37.135 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:37.135 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:37.394 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:37.394 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:37.394 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:37.394 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.394 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:37.394 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:37.394 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:37.394 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:37.394 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:37.394 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:37.652 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:37.652 "name": "pt3", 00:12:37.652 "aliases": [ 00:12:37.652 "0a747402-d572-5c22-811d-7b7486bf708b" 00:12:37.652 ], 00:12:37.652 "product_name": "passthru", 00:12:37.652 "block_size": 512, 00:12:37.652 "num_blocks": 65536, 00:12:37.652 "uuid": "0a747402-d572-5c22-811d-7b7486bf708b", 00:12:37.652 "assigned_rate_limits": { 00:12:37.652 "rw_ios_per_sec": 0, 00:12:37.652 "rw_mbytes_per_sec": 0, 00:12:37.652 "r_mbytes_per_sec": 0, 00:12:37.652 "w_mbytes_per_sec": 0 00:12:37.652 }, 00:12:37.652 "claimed": true, 00:12:37.652 "claim_type": "exclusive_write", 00:12:37.652 "zoned": false, 00:12:37.652 "supported_io_types": { 00:12:37.652 "read": true, 00:12:37.652 "write": true, 00:12:37.652 "unmap": true, 00:12:37.652 "write_zeroes": true, 00:12:37.652 "flush": true, 00:12:37.652 "reset": true, 00:12:37.652 "compare": false, 00:12:37.652 "compare_and_write": false, 00:12:37.652 "abort": true, 00:12:37.652 "nvme_admin": false, 00:12:37.652 "nvme_io": false 00:12:37.652 }, 00:12:37.652 "memory_domains": [ 00:12:37.652 { 00:12:37.652 "dma_device_id": "system", 00:12:37.652 "dma_device_type": 1 00:12:37.652 }, 00:12:37.652 { 00:12:37.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.652 "dma_device_type": 2 00:12:37.652 } 00:12:37.652 ], 00:12:37.652 "driver_specific": { 00:12:37.652 "passthru": { 00:12:37.652 "name": "pt3", 00:12:37.652 "base_bdev_name": "malloc3" 00:12:37.652 } 00:12:37.652 } 00:12:37.652 }' 00:12:37.652 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:37.652 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:37.910 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:37.910 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:37.910 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:37.910 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:37.910 03:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:37.910 03:07:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:37.910 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.910 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:38.168 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:38.169 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:38.169 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:38.169 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:38.427 [2024-05-15 03:07:09.353933] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.427 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2e1bad3e-42a9-462c-bdda-81a528b0644f 00:12:38.427 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2e1bad3e-42a9-462c-bdda-81a528b0644f ']' 00:12:38.427 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:38.685 [2024-05-15 03:07:09.606330] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.685 [2024-05-15 03:07:09.606346] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.685 [2024-05-15 03:07:09.606393] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.685 [2024-05-15 03:07:09.606443] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.685 [2024-05-15 03:07:09.606451] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1eb20f0 name raid_bdev1, state offline 00:12:38.685 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.685 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:38.943 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:38.943 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:38.943 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:38.943 03:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:39.202 03:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:39.202 03:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:39.460 03:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:39.460 03:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:39.719 03:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # 
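# Teardown between the two phases of the superblock test: delete the raid bdev, then each
# passthru, leaving the malloc bdevs -- and the raid superblock they carry -- in place.
# Condensed sketch, assuming rpc() from the earlier sketch:
rpc bdev_raid_delete raid_bdev1
for i in 1 2 3; do rpc bdev_passthru_delete pt$i; done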
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:39.719 03:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:39.977 03:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:39.977 03:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:39.977 03:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:39.977 03:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:39.977 03:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:39.977 03:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.977 03:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:39.977 03:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.977 03:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:39.977 03:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.977 03:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:39.978 03:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:12:39.978 03:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:39.978 [2024-05-15 03:07:11.134343] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:39.978 [2024-05-15 03:07:11.135760] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:39.978 [2024-05-15 03:07:11.135803] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:39.978 [2024-05-15 03:07:11.135856] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:39.978 [2024-05-15 03:07:11.135892] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:39.978 [2024-05-15 03:07:11.135912] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:39.978 [2024-05-15 03:07:11.135926] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:39.978 [2024-05-15 03:07:11.135934] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1eb2450 name raid_bdev1, state configuring 00:12:40.236 
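# The failure dumped below is the point of this step: with raid_bdev1 gone but its
# superblock still present on malloc1-3, re-running bdev_raid_create over the raw malloc
# bdevs must be rejected with -17 (File exists), which the NOT wrapper above asserts.
# Hand-run equivalent (rpc() as before):
if rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
  echo 'expected create over superblock-carrying bdevs to fail' >&2
fi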
request:
00:12:40.236 {
00:12:40.236 "name": "raid_bdev1",
00:12:40.236 "raid_level": "raid0",
00:12:40.236 "base_bdevs": [
00:12:40.236 "malloc1",
00:12:40.236 "malloc2",
00:12:40.236 "malloc3"
00:12:40.236 ],
00:12:40.236 "superblock": false,
00:12:40.236 "strip_size_kb": 64,
00:12:40.236 "method": "bdev_raid_create",
00:12:40.236 "req_id": 1
00:12:40.236 }
00:12:40.236 Got JSON-RPC error response
00:12:40.236 response:
00:12:40.236 {
00:12:40.236 "code": -17,
00:12:40.236 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:40.236 }
00:12:40.236 03:07:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1
00:12:40.236 03:07:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:12:40.236 03:07:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:12:40.236 03:07:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:12:40.236 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:40.236 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:40.494 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:40.494 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:40.494 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:40.494 [2024-05-15 03:07:11.639628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:40.494 [2024-05-15 03:07:11.639670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:40.494 [2024-05-15 03:07:11.639687] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ead320
00:12:40.494 [2024-05-15 03:07:11.639696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:40.494 [2024-05-15 03:07:11.641373] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:40.494 [2024-05-15 03:07:11.641398] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:40.494 [2024-05-15 03:07:11.641459] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:12:40.494 [2024-05-15 03:07:11.641482] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:40.494 pt1
00:12:40.752 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:12:40.752 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:12:40.752 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:12:40.752 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:12:40.752 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:12:40.752 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:12:40.752 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:12:40.752 03:07:11 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:40.752 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:40.752 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:40.752 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.752 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.009 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:41.009 "name": "raid_bdev1", 00:12:41.009 "uuid": "2e1bad3e-42a9-462c-bdda-81a528b0644f", 00:12:41.009 "strip_size_kb": 64, 00:12:41.009 "state": "configuring", 00:12:41.009 "raid_level": "raid0", 00:12:41.009 "superblock": true, 00:12:41.009 "num_base_bdevs": 3, 00:12:41.009 "num_base_bdevs_discovered": 1, 00:12:41.009 "num_base_bdevs_operational": 3, 00:12:41.009 "base_bdevs_list": [ 00:12:41.009 { 00:12:41.009 "name": "pt1", 00:12:41.009 "uuid": "f70c9bee-f6c1-5706-94b9-81384cfc7264", 00:12:41.009 "is_configured": true, 00:12:41.009 "data_offset": 2048, 00:12:41.009 "data_size": 63488 00:12:41.009 }, 00:12:41.009 { 00:12:41.009 "name": null, 00:12:41.009 "uuid": "fcb171b3-b7d3-5372-85a8-2eeeadff9921", 00:12:41.009 "is_configured": false, 00:12:41.009 "data_offset": 2048, 00:12:41.009 "data_size": 63488 00:12:41.009 }, 00:12:41.009 { 00:12:41.009 "name": null, 00:12:41.009 "uuid": "0a747402-d572-5c22-811d-7b7486bf708b", 00:12:41.009 "is_configured": false, 00:12:41.009 "data_offset": 2048, 00:12:41.009 "data_size": 63488 00:12:41.009 } 00:12:41.009 ] 00:12:41.009 }' 00:12:41.009 03:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:41.009 03:07:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.601 03:07:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:41.601 03:07:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:41.858 [2024-05-15 03:07:12.770670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:41.859 [2024-05-15 03:07:12.770713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.859 [2024-05-15 03:07:12.770732] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d00620 00:12:41.859 [2024-05-15 03:07:12.770742] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.859 [2024-05-15 03:07:12.771075] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.859 [2024-05-15 03:07:12.771091] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:41.859 [2024-05-15 03:07:12.771147] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:41.859 [2024-05-15 03:07:12.771166] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:41.859 pt2 00:12:41.859 03:07:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:42.116 [2024-05-15 03:07:13.027361] 
bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:42.116 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:42.116 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:42.116 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:42.116 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:42.116 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:42.116 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:42.116 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:42.116 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:42.116 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:42.116 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:42.116 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.116 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.374 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:42.374 "name": "raid_bdev1", 00:12:42.374 "uuid": "2e1bad3e-42a9-462c-bdda-81a528b0644f", 00:12:42.374 "strip_size_kb": 64, 00:12:42.374 "state": "configuring", 00:12:42.374 "raid_level": "raid0", 00:12:42.374 "superblock": true, 00:12:42.374 "num_base_bdevs": 3, 00:12:42.374 "num_base_bdevs_discovered": 1, 00:12:42.374 "num_base_bdevs_operational": 3, 00:12:42.374 "base_bdevs_list": [ 00:12:42.374 { 00:12:42.374 "name": "pt1", 00:12:42.375 "uuid": "f70c9bee-f6c1-5706-94b9-81384cfc7264", 00:12:42.375 "is_configured": true, 00:12:42.375 "data_offset": 2048, 00:12:42.375 "data_size": 63488 00:12:42.375 }, 00:12:42.375 { 00:12:42.375 "name": null, 00:12:42.375 "uuid": "fcb171b3-b7d3-5372-85a8-2eeeadff9921", 00:12:42.375 "is_configured": false, 00:12:42.375 "data_offset": 2048, 00:12:42.375 "data_size": 63488 00:12:42.375 }, 00:12:42.375 { 00:12:42.375 "name": null, 00:12:42.375 "uuid": "0a747402-d572-5c22-811d-7b7486bf708b", 00:12:42.375 "is_configured": false, 00:12:42.375 "data_offset": 2048, 00:12:42.375 "data_size": 63488 00:12:42.375 } 00:12:42.375 ] 00:12:42.375 }' 00:12:42.375 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:42.375 03:07:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.976 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:42.976 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:42.976 03:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:43.234 [2024-05-15 03:07:14.158375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:43.234 [2024-05-15 03:07:14.158419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:12:43.234 [2024-05-15 03:07:14.158436] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d01870 00:12:43.234 [2024-05-15 03:07:14.158445] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.234 [2024-05-15 03:07:14.158775] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.234 [2024-05-15 03:07:14.158791] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:43.234 [2024-05-15 03:07:14.158847] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:43.234 [2024-05-15 03:07:14.158873] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:43.234 pt2 00:12:43.234 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:43.234 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:43.234 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:43.492 [2024-05-15 03:07:14.419084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:43.492 [2024-05-15 03:07:14.419116] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.492 [2024-05-15 03:07:14.419133] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eaec50 00:12:43.492 [2024-05-15 03:07:14.419142] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.492 [2024-05-15 03:07:14.419439] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.492 [2024-05-15 03:07:14.419455] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:43.492 [2024-05-15 03:07:14.419502] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:12:43.492 [2024-05-15 03:07:14.419518] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:43.492 [2024-05-15 03:07:14.419621] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d00a20 00:12:43.492 [2024-05-15 03:07:14.419630] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:43.492 [2024-05-15 03:07:14.419802] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d04910 00:12:43.492 [2024-05-15 03:07:14.419942] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d00a20 00:12:43.492 [2024-05-15 03:07:14.419951] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d00a20 00:12:43.492 [2024-05-15 03:07:14.420047] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.492 pt3 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:43.492 03:07:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.492 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.750 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:43.750 "name": "raid_bdev1", 00:12:43.750 "uuid": "2e1bad3e-42a9-462c-bdda-81a528b0644f", 00:12:43.750 "strip_size_kb": 64, 00:12:43.750 "state": "online", 00:12:43.750 "raid_level": "raid0", 00:12:43.750 "superblock": true, 00:12:43.750 "num_base_bdevs": 3, 00:12:43.750 "num_base_bdevs_discovered": 3, 00:12:43.750 "num_base_bdevs_operational": 3, 00:12:43.750 "base_bdevs_list": [ 00:12:43.750 { 00:12:43.750 "name": "pt1", 00:12:43.750 "uuid": "f70c9bee-f6c1-5706-94b9-81384cfc7264", 00:12:43.750 "is_configured": true, 00:12:43.750 "data_offset": 2048, 00:12:43.750 "data_size": 63488 00:12:43.750 }, 00:12:43.750 { 00:12:43.750 "name": "pt2", 00:12:43.750 "uuid": "fcb171b3-b7d3-5372-85a8-2eeeadff9921", 00:12:43.750 "is_configured": true, 00:12:43.750 "data_offset": 2048, 00:12:43.750 "data_size": 63488 00:12:43.750 }, 00:12:43.750 { 00:12:43.750 "name": "pt3", 00:12:43.750 "uuid": "0a747402-d572-5c22-811d-7b7486bf708b", 00:12:43.750 "is_configured": true, 00:12:43.750 "data_offset": 2048, 00:12:43.750 "data_size": 63488 00:12:43.750 } 00:12:43.750 ] 00:12:43.750 }' 00:12:43.750 03:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:43.750 03:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.315 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:44.315 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:12:44.315 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:44.315 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:44.315 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:44.315 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:12:44.315 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:44.315 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:44.573 [2024-05-15 03:07:15.558418] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.573 03:07:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:44.573 "name": "raid_bdev1", 00:12:44.573 "aliases": [ 00:12:44.573 "2e1bad3e-42a9-462c-bdda-81a528b0644f" 00:12:44.573 ], 00:12:44.573 "product_name": "Raid Volume", 00:12:44.573 "block_size": 512, 00:12:44.573 "num_blocks": 190464, 00:12:44.573 "uuid": "2e1bad3e-42a9-462c-bdda-81a528b0644f", 00:12:44.573 "assigned_rate_limits": { 00:12:44.573 "rw_ios_per_sec": 0, 00:12:44.573 "rw_mbytes_per_sec": 0, 00:12:44.573 "r_mbytes_per_sec": 0, 00:12:44.573 "w_mbytes_per_sec": 0 00:12:44.573 }, 00:12:44.573 "claimed": false, 00:12:44.573 "zoned": false, 00:12:44.573 "supported_io_types": { 00:12:44.573 "read": true, 00:12:44.573 "write": true, 00:12:44.573 "unmap": true, 00:12:44.573 "write_zeroes": true, 00:12:44.573 "flush": true, 00:12:44.573 "reset": true, 00:12:44.573 "compare": false, 00:12:44.573 "compare_and_write": false, 00:12:44.573 "abort": false, 00:12:44.573 "nvme_admin": false, 00:12:44.573 "nvme_io": false 00:12:44.573 }, 00:12:44.573 "memory_domains": [ 00:12:44.573 { 00:12:44.573 "dma_device_id": "system", 00:12:44.573 "dma_device_type": 1 00:12:44.573 }, 00:12:44.573 { 00:12:44.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.573 "dma_device_type": 2 00:12:44.573 }, 00:12:44.573 { 00:12:44.573 "dma_device_id": "system", 00:12:44.573 "dma_device_type": 1 00:12:44.573 }, 00:12:44.573 { 00:12:44.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.573 "dma_device_type": 2 00:12:44.573 }, 00:12:44.573 { 00:12:44.573 "dma_device_id": "system", 00:12:44.573 "dma_device_type": 1 00:12:44.573 }, 00:12:44.573 { 00:12:44.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.573 "dma_device_type": 2 00:12:44.573 } 00:12:44.573 ], 00:12:44.573 "driver_specific": { 00:12:44.573 "raid": { 00:12:44.573 "uuid": "2e1bad3e-42a9-462c-bdda-81a528b0644f", 00:12:44.573 "strip_size_kb": 64, 00:12:44.573 "state": "online", 00:12:44.573 "raid_level": "raid0", 00:12:44.573 "superblock": true, 00:12:44.573 "num_base_bdevs": 3, 00:12:44.573 "num_base_bdevs_discovered": 3, 00:12:44.573 "num_base_bdevs_operational": 3, 00:12:44.573 "base_bdevs_list": [ 00:12:44.573 { 00:12:44.573 "name": "pt1", 00:12:44.573 "uuid": "f70c9bee-f6c1-5706-94b9-81384cfc7264", 00:12:44.573 "is_configured": true, 00:12:44.573 "data_offset": 2048, 00:12:44.573 "data_size": 63488 00:12:44.573 }, 00:12:44.573 { 00:12:44.573 "name": "pt2", 00:12:44.573 "uuid": "fcb171b3-b7d3-5372-85a8-2eeeadff9921", 00:12:44.573 "is_configured": true, 00:12:44.573 "data_offset": 2048, 00:12:44.573 "data_size": 63488 00:12:44.573 }, 00:12:44.573 { 00:12:44.573 "name": "pt3", 00:12:44.573 "uuid": "0a747402-d572-5c22-811d-7b7486bf708b", 00:12:44.573 "is_configured": true, 00:12:44.573 "data_offset": 2048, 00:12:44.573 "data_size": 63488 00:12:44.573 } 00:12:44.573 ] 00:12:44.573 } 00:12:44.573 } 00:12:44.573 }' 00:12:44.573 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:44.573 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:12:44.573 pt2 00:12:44.573 pt3' 00:12:44.573 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:44.573 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:44.573 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # 
jq '.[]' 00:12:44.831 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:44.831 "name": "pt1", 00:12:44.831 "aliases": [ 00:12:44.831 "f70c9bee-f6c1-5706-94b9-81384cfc7264" 00:12:44.831 ], 00:12:44.831 "product_name": "passthru", 00:12:44.831 "block_size": 512, 00:12:44.831 "num_blocks": 65536, 00:12:44.831 "uuid": "f70c9bee-f6c1-5706-94b9-81384cfc7264", 00:12:44.831 "assigned_rate_limits": { 00:12:44.831 "rw_ios_per_sec": 0, 00:12:44.831 "rw_mbytes_per_sec": 0, 00:12:44.831 "r_mbytes_per_sec": 0, 00:12:44.831 "w_mbytes_per_sec": 0 00:12:44.831 }, 00:12:44.831 "claimed": true, 00:12:44.831 "claim_type": "exclusive_write", 00:12:44.831 "zoned": false, 00:12:44.831 "supported_io_types": { 00:12:44.831 "read": true, 00:12:44.831 "write": true, 00:12:44.831 "unmap": true, 00:12:44.831 "write_zeroes": true, 00:12:44.831 "flush": true, 00:12:44.831 "reset": true, 00:12:44.831 "compare": false, 00:12:44.831 "compare_and_write": false, 00:12:44.831 "abort": true, 00:12:44.831 "nvme_admin": false, 00:12:44.831 "nvme_io": false 00:12:44.831 }, 00:12:44.831 "memory_domains": [ 00:12:44.831 { 00:12:44.831 "dma_device_id": "system", 00:12:44.831 "dma_device_type": 1 00:12:44.831 }, 00:12:44.831 { 00:12:44.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.831 "dma_device_type": 2 00:12:44.831 } 00:12:44.831 ], 00:12:44.831 "driver_specific": { 00:12:44.831 "passthru": { 00:12:44.831 "name": "pt1", 00:12:44.831 "base_bdev_name": "malloc1" 00:12:44.831 } 00:12:44.831 } 00:12:44.831 }' 00:12:44.831 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:44.831 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:44.831 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:44.831 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:44.831 03:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:45.089 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:45.089 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:45.089 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:45.089 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:45.089 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:45.089 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:45.089 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:45.089 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:45.089 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:45.089 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:45.346 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:45.346 "name": "pt2", 00:12:45.346 "aliases": [ 00:12:45.347 "fcb171b3-b7d3-5372-85a8-2eeeadff9921" 00:12:45.347 ], 00:12:45.347 "product_name": "passthru", 00:12:45.347 "block_size": 512, 00:12:45.347 "num_blocks": 65536, 00:12:45.347 "uuid": "fcb171b3-b7d3-5372-85a8-2eeeadff9921", 00:12:45.347 "assigned_rate_limits": 
{ 00:12:45.347 "rw_ios_per_sec": 0, 00:12:45.347 "rw_mbytes_per_sec": 0, 00:12:45.347 "r_mbytes_per_sec": 0, 00:12:45.347 "w_mbytes_per_sec": 0 00:12:45.347 }, 00:12:45.347 "claimed": true, 00:12:45.347 "claim_type": "exclusive_write", 00:12:45.347 "zoned": false, 00:12:45.347 "supported_io_types": { 00:12:45.347 "read": true, 00:12:45.347 "write": true, 00:12:45.347 "unmap": true, 00:12:45.347 "write_zeroes": true, 00:12:45.347 "flush": true, 00:12:45.347 "reset": true, 00:12:45.347 "compare": false, 00:12:45.347 "compare_and_write": false, 00:12:45.347 "abort": true, 00:12:45.347 "nvme_admin": false, 00:12:45.347 "nvme_io": false 00:12:45.347 }, 00:12:45.347 "memory_domains": [ 00:12:45.347 { 00:12:45.347 "dma_device_id": "system", 00:12:45.347 "dma_device_type": 1 00:12:45.347 }, 00:12:45.347 { 00:12:45.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.347 "dma_device_type": 2 00:12:45.347 } 00:12:45.347 ], 00:12:45.347 "driver_specific": { 00:12:45.347 "passthru": { 00:12:45.347 "name": "pt2", 00:12:45.347 "base_bdev_name": "malloc2" 00:12:45.347 } 00:12:45.347 } 00:12:45.347 }' 00:12:45.347 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:45.347 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:45.605 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:45.605 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:45.605 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:45.605 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:45.605 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:45.605 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:45.605 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:45.605 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:45.605 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:45.862 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:45.862 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:45.862 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:45.862 03:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:46.121 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:46.121 "name": "pt3", 00:12:46.121 "aliases": [ 00:12:46.121 "0a747402-d572-5c22-811d-7b7486bf708b" 00:12:46.121 ], 00:12:46.121 "product_name": "passthru", 00:12:46.121 "block_size": 512, 00:12:46.121 "num_blocks": 65536, 00:12:46.121 "uuid": "0a747402-d572-5c22-811d-7b7486bf708b", 00:12:46.121 "assigned_rate_limits": { 00:12:46.121 "rw_ios_per_sec": 0, 00:12:46.121 "rw_mbytes_per_sec": 0, 00:12:46.121 "r_mbytes_per_sec": 0, 00:12:46.121 "w_mbytes_per_sec": 0 00:12:46.121 }, 00:12:46.121 "claimed": true, 00:12:46.121 "claim_type": "exclusive_write", 00:12:46.121 "zoned": false, 00:12:46.121 "supported_io_types": { 00:12:46.121 "read": true, 00:12:46.121 "write": true, 00:12:46.121 "unmap": true, 00:12:46.121 "write_zeroes": true, 00:12:46.121 
"flush": true, 00:12:46.121 "reset": true, 00:12:46.121 "compare": false, 00:12:46.121 "compare_and_write": false, 00:12:46.121 "abort": true, 00:12:46.121 "nvme_admin": false, 00:12:46.121 "nvme_io": false 00:12:46.121 }, 00:12:46.121 "memory_domains": [ 00:12:46.121 { 00:12:46.121 "dma_device_id": "system", 00:12:46.121 "dma_device_type": 1 00:12:46.121 }, 00:12:46.121 { 00:12:46.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.121 "dma_device_type": 2 00:12:46.121 } 00:12:46.121 ], 00:12:46.121 "driver_specific": { 00:12:46.121 "passthru": { 00:12:46.121 "name": "pt3", 00:12:46.121 "base_bdev_name": "malloc3" 00:12:46.121 } 00:12:46.121 } 00:12:46.121 }' 00:12:46.121 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:46.121 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:46.121 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:46.121 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:46.121 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:46.121 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:46.121 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:46.121 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:46.379 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:46.379 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:46.379 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:46.380 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:46.380 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:46.380 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:46.638 [2024-05-15 03:07:17.599885] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2e1bad3e-42a9-462c-bdda-81a528b0644f '!=' 2e1bad3e-42a9-462c-bdda-81a528b0644f ']' 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 4070913 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 4070913 ']' 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 4070913 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4070913 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:46.638 03:07:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4070913' 00:12:46.638 killing process with pid 4070913 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 4070913 00:12:46.638 [2024-05-15 03:07:17.665543] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.638 [2024-05-15 03:07:17.665602] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.638 [2024-05-15 03:07:17.665652] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.638 [2024-05-15 03:07:17.665661] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d00a20 name raid_bdev1, state offline 00:12:46.638 03:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 4070913 00:12:46.638 [2024-05-15 03:07:17.690879] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:46.897 03:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:12:46.897 00:12:46.897 real 0m14.671s 00:12:46.897 user 0m26.986s 00:12:46.897 sys 0m2.113s 00:12:46.897 03:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:46.897 03:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.897 ************************************ 00:12:46.897 END TEST raid_superblock_test 00:12:46.897 ************************************ 00:12:46.897 03:07:17 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:12:46.897 03:07:17 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:12:46.897 03:07:17 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:46.897 03:07:17 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:46.897 03:07:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:46.897 ************************************ 00:12:46.897 START TEST raid_state_function_test 00:12:46.897 ************************************ 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 false 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( 
i++ )) 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:12:46.897 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=4073715 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4073715' 00:12:46.898 Process raid pid: 4073715 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 4073715 /var/tmp/spdk-raid.sock 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 4073715 ']' 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:46.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:46.898 03:07:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.898 [2024-05-15 03:07:18.040248] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
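The verify_raid_bdev_state calls traced at bdev_raid.sh@117-@127 throughout this run follow a pattern that can be reconstructed from the xtrace alone; a minimal bash sketch is below. The rpc.py and jq invocations are copied from the trace, while the field assertions at the end execute under xtrace_disable in this log, so that part is inferred rather than the verbatim upstream helper.

verify_raid_bdev_state() {
    # Positional args match the locals echoed at bdev_raid.sh@117-@121,
    # e.g. "verify_raid_bdev_state raid_bdev1 configuring raid0 64 3".
    local raid_bdev_name=$1 expected_state=$2 raid_level=$3
    local strip_size=$4 num_base_bdevs_operational=$5
    local raid_bdev_info

    # bdev_raid.sh@127: dump every raid bdev over the test's UNIX-socket RPC
    # channel and select the one under test by name.
    raid_bdev_info=$(/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r ".[] | select(.name == \"$raid_bdev_name\")")

    # Inferred assertions (these run under xtrace_disable in this log): compare
    # the reported fields against the expectations passed in by the caller.
    [ "$(jq -r '.state' <<< "$raid_bdev_info")" = "$expected_state" ]
    [ "$(jq -r '.raid_level' <<< "$raid_bdev_info")" = "$raid_level" ]
    [ "$(jq -r '.strip_size_kb' <<< "$raid_bdev_info")" = "$strip_size" ]
    [ "$(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info")" = "$num_base_bdevs_operational" ]
}

This is likely also why the helper queries bdev_raid_get_bdevs rather than bdev_get_bdevs: a raid bdev still in the configuring state is not yet exposed through the regular bdev layer, as the "state": "configuring" dumps above and below show.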
00:12:46.898 [2024-05-15 03:07:18.040299] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.156 [2024-05-15 03:07:18.130182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.156 [2024-05-15 03:07:18.223400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.156 [2024-05-15 03:07:18.286065] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.156 [2024-05-15 03:07:18.286096] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:47.414 [2024-05-15 03:07:18.488448] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.414 [2024-05-15 03:07:18.488484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.414 [2024-05-15 03:07:18.488493] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.414 [2024-05-15 03:07:18.488502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.414 [2024-05-15 03:07:18.488510] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:47.414 [2024-05-15 03:07:18.488518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.414 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.672 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:12:47.672 "name": "Existed_Raid", 00:12:47.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.672 "strip_size_kb": 64, 00:12:47.672 "state": "configuring", 00:12:47.672 "raid_level": "concat", 00:12:47.672 "superblock": false, 00:12:47.672 "num_base_bdevs": 3, 00:12:47.672 "num_base_bdevs_discovered": 0, 00:12:47.672 "num_base_bdevs_operational": 3, 00:12:47.672 "base_bdevs_list": [ 00:12:47.672 { 00:12:47.672 "name": "BaseBdev1", 00:12:47.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.672 "is_configured": false, 00:12:47.672 "data_offset": 0, 00:12:47.672 "data_size": 0 00:12:47.672 }, 00:12:47.672 { 00:12:47.672 "name": "BaseBdev2", 00:12:47.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.672 "is_configured": false, 00:12:47.672 "data_offset": 0, 00:12:47.672 "data_size": 0 00:12:47.672 }, 00:12:47.672 { 00:12:47.672 "name": "BaseBdev3", 00:12:47.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.672 "is_configured": false, 00:12:47.672 "data_offset": 0, 00:12:47.672 "data_size": 0 00:12:47.672 } 00:12:47.672 ] 00:12:47.672 }' 00:12:47.672 03:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:47.672 03:07:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.238 03:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:48.496 [2024-05-15 03:07:19.571201] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.496 [2024-05-15 03:07:19.571229] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c86de0 name Existed_Raid, state configuring 00:12:48.496 03:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:48.755 [2024-05-15 03:07:19.823901] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.755 [2024-05-15 03:07:19.823933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.755 [2024-05-15 03:07:19.823942] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.755 [2024-05-15 03:07:19.823950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.755 [2024-05-15 03:07:19.823958] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.755 [2024-05-15 03:07:19.823966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.755 03:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:49.013 [2024-05-15 03:07:20.090336] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.013 BaseBdev1 00:12:49.013 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:12:49.013 03:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:49.013 03:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:49.013 03:07:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:49.013 03:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:49.013 03:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:49.013 03:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:49.271 03:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:49.529 [ 00:12:49.529 { 00:12:49.529 "name": "BaseBdev1", 00:12:49.529 "aliases": [ 00:12:49.529 "22180557-444c-47ac-a967-ff6faad515e9" 00:12:49.529 ], 00:12:49.529 "product_name": "Malloc disk", 00:12:49.529 "block_size": 512, 00:12:49.529 "num_blocks": 65536, 00:12:49.529 "uuid": "22180557-444c-47ac-a967-ff6faad515e9", 00:12:49.529 "assigned_rate_limits": { 00:12:49.529 "rw_ios_per_sec": 0, 00:12:49.529 "rw_mbytes_per_sec": 0, 00:12:49.529 "r_mbytes_per_sec": 0, 00:12:49.529 "w_mbytes_per_sec": 0 00:12:49.529 }, 00:12:49.529 "claimed": true, 00:12:49.529 "claim_type": "exclusive_write", 00:12:49.529 "zoned": false, 00:12:49.529 "supported_io_types": { 00:12:49.529 "read": true, 00:12:49.529 "write": true, 00:12:49.529 "unmap": true, 00:12:49.529 "write_zeroes": true, 00:12:49.529 "flush": true, 00:12:49.529 "reset": true, 00:12:49.529 "compare": false, 00:12:49.529 "compare_and_write": false, 00:12:49.529 "abort": true, 00:12:49.529 "nvme_admin": false, 00:12:49.529 "nvme_io": false 00:12:49.529 }, 00:12:49.529 "memory_domains": [ 00:12:49.529 { 00:12:49.529 "dma_device_id": "system", 00:12:49.529 "dma_device_type": 1 00:12:49.529 }, 00:12:49.529 { 00:12:49.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.529 "dma_device_type": 2 00:12:49.529 } 00:12:49.529 ], 00:12:49.529 "driver_specific": {} 00:12:49.529 } 00:12:49.529 ] 00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:12:49.529 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.787 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:49.787 "name": "Existed_Raid", 00:12:49.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.787 "strip_size_kb": 64, 00:12:49.787 "state": "configuring", 00:12:49.787 "raid_level": "concat", 00:12:49.787 "superblock": false, 00:12:49.787 "num_base_bdevs": 3, 00:12:49.787 "num_base_bdevs_discovered": 1, 00:12:49.787 "num_base_bdevs_operational": 3, 00:12:49.787 "base_bdevs_list": [ 00:12:49.787 { 00:12:49.787 "name": "BaseBdev1", 00:12:49.787 "uuid": "22180557-444c-47ac-a967-ff6faad515e9", 00:12:49.787 "is_configured": true, 00:12:49.787 "data_offset": 0, 00:12:49.787 "data_size": 65536 00:12:49.787 }, 00:12:49.787 { 00:12:49.787 "name": "BaseBdev2", 00:12:49.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.787 "is_configured": false, 00:12:49.787 "data_offset": 0, 00:12:49.787 "data_size": 0 00:12:49.787 }, 00:12:49.787 { 00:12:49.787 "name": "BaseBdev3", 00:12:49.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.787 "is_configured": false, 00:12:49.787 "data_offset": 0, 00:12:49.787 "data_size": 0 00:12:49.787 } 00:12:49.787 ] 00:12:49.787 }' 00:12:49.787 03:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:49.787 03:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.351 03:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:50.608 [2024-05-15 03:07:21.734739] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:50.608 [2024-05-15 03:07:21.734778] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c866b0 name Existed_Raid, state configuring 00:12:50.608 03:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:50.865 [2024-05-15 03:07:21.987444] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.865 [2024-05-15 03:07:21.988981] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:50.866 [2024-05-15 03:07:21.989012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:50.866 [2024-05-15 03:07:21.989021] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:50.866 [2024-05-15 03:07:21.989029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:50.866 03:07:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.866 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.123 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:51.123 "name": "Existed_Raid", 00:12:51.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.123 "strip_size_kb": 64, 00:12:51.123 "state": "configuring", 00:12:51.123 "raid_level": "concat", 00:12:51.123 "superblock": false, 00:12:51.123 "num_base_bdevs": 3, 00:12:51.123 "num_base_bdevs_discovered": 1, 00:12:51.123 "num_base_bdevs_operational": 3, 00:12:51.123 "base_bdevs_list": [ 00:12:51.123 { 00:12:51.123 "name": "BaseBdev1", 00:12:51.123 "uuid": "22180557-444c-47ac-a967-ff6faad515e9", 00:12:51.123 "is_configured": true, 00:12:51.123 "data_offset": 0, 00:12:51.123 "data_size": 65536 00:12:51.123 }, 00:12:51.123 { 00:12:51.123 "name": "BaseBdev2", 00:12:51.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.123 "is_configured": false, 00:12:51.123 "data_offset": 0, 00:12:51.123 "data_size": 0 00:12:51.123 }, 00:12:51.123 { 00:12:51.123 "name": "BaseBdev3", 00:12:51.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.123 "is_configured": false, 00:12:51.123 "data_offset": 0, 00:12:51.123 "data_size": 0 00:12:51.123 } 00:12:51.123 ] 00:12:51.123 }' 00:12:51.123 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:51.123 03:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.053 03:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:52.053 [2024-05-15 03:07:23.117619] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.053 BaseBdev2 00:12:52.053 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:12:52.053 03:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:52.053 03:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:52.053 03:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:52.053 03:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:52.053 03:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:52.053 03:07:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:52.309 03:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:52.567 [ 00:12:52.567 { 00:12:52.567 "name": "BaseBdev2", 00:12:52.567 "aliases": [ 00:12:52.567 "b00797ff-bea8-45b4-ac32-7b0182f2fdd7" 00:12:52.567 ], 00:12:52.567 "product_name": "Malloc disk", 00:12:52.567 "block_size": 512, 00:12:52.567 "num_blocks": 65536, 00:12:52.567 "uuid": "b00797ff-bea8-45b4-ac32-7b0182f2fdd7", 00:12:52.567 "assigned_rate_limits": { 00:12:52.567 "rw_ios_per_sec": 0, 00:12:52.567 "rw_mbytes_per_sec": 0, 00:12:52.567 "r_mbytes_per_sec": 0, 00:12:52.567 "w_mbytes_per_sec": 0 00:12:52.567 }, 00:12:52.567 "claimed": true, 00:12:52.567 "claim_type": "exclusive_write", 00:12:52.567 "zoned": false, 00:12:52.567 "supported_io_types": { 00:12:52.567 "read": true, 00:12:52.567 "write": true, 00:12:52.567 "unmap": true, 00:12:52.567 "write_zeroes": true, 00:12:52.567 "flush": true, 00:12:52.567 "reset": true, 00:12:52.567 "compare": false, 00:12:52.567 "compare_and_write": false, 00:12:52.567 "abort": true, 00:12:52.567 "nvme_admin": false, 00:12:52.567 "nvme_io": false 00:12:52.567 }, 00:12:52.567 "memory_domains": [ 00:12:52.567 { 00:12:52.567 "dma_device_id": "system", 00:12:52.567 "dma_device_type": 1 00:12:52.567 }, 00:12:52.567 { 00:12:52.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.567 "dma_device_type": 2 00:12:52.567 } 00:12:52.567 ], 00:12:52.567 "driver_specific": {} 00:12:52.567 } 00:12:52.567 ] 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.567 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:12:52.825 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:52.825 "name": "Existed_Raid", 00:12:52.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.825 "strip_size_kb": 64, 00:12:52.825 "state": "configuring", 00:12:52.825 "raid_level": "concat", 00:12:52.825 "superblock": false, 00:12:52.825 "num_base_bdevs": 3, 00:12:52.825 "num_base_bdevs_discovered": 2, 00:12:52.825 "num_base_bdevs_operational": 3, 00:12:52.825 "base_bdevs_list": [ 00:12:52.825 { 00:12:52.825 "name": "BaseBdev1", 00:12:52.825 "uuid": "22180557-444c-47ac-a967-ff6faad515e9", 00:12:52.825 "is_configured": true, 00:12:52.825 "data_offset": 0, 00:12:52.825 "data_size": 65536 00:12:52.825 }, 00:12:52.825 { 00:12:52.825 "name": "BaseBdev2", 00:12:52.825 "uuid": "b00797ff-bea8-45b4-ac32-7b0182f2fdd7", 00:12:52.825 "is_configured": true, 00:12:52.825 "data_offset": 0, 00:12:52.825 "data_size": 65536 00:12:52.825 }, 00:12:52.825 { 00:12:52.825 "name": "BaseBdev3", 00:12:52.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.825 "is_configured": false, 00:12:52.825 "data_offset": 0, 00:12:52.825 "data_size": 0 00:12:52.825 } 00:12:52.825 ] 00:12:52.825 }' 00:12:52.825 03:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:52.825 03:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.390 03:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:53.956 [2024-05-15 03:07:24.845492] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.956 [2024-05-15 03:07:24.845526] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c87760 00:12:53.956 [2024-05-15 03:07:24.845533] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:53.956 [2024-05-15 03:07:24.845733] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c9e690 00:12:53.956 [2024-05-15 03:07:24.845878] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c87760 00:12:53.956 [2024-05-15 03:07:24.845888] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c87760 00:12:53.956 [2024-05-15 03:07:24.846053] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.956 BaseBdev3 00:12:53.956 03:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:12:53.956 03:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:12:53.956 03:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:53.956 03:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:53.956 03:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:53.956 03:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:53.956 03:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:54.215 03:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:54.215 [ 00:12:54.215 { 00:12:54.215 "name": "BaseBdev3", 00:12:54.215 "aliases": [ 00:12:54.215 "0d7558d1-e0df-4f21-a7d4-060c1f6e9d3b" 00:12:54.215 ], 00:12:54.215 "product_name": "Malloc disk", 00:12:54.215 "block_size": 512, 00:12:54.215 "num_blocks": 65536, 00:12:54.215 "uuid": "0d7558d1-e0df-4f21-a7d4-060c1f6e9d3b", 00:12:54.215 "assigned_rate_limits": { 00:12:54.215 "rw_ios_per_sec": 0, 00:12:54.215 "rw_mbytes_per_sec": 0, 00:12:54.215 "r_mbytes_per_sec": 0, 00:12:54.215 "w_mbytes_per_sec": 0 00:12:54.215 }, 00:12:54.215 "claimed": true, 00:12:54.215 "claim_type": "exclusive_write", 00:12:54.215 "zoned": false, 00:12:54.215 "supported_io_types": { 00:12:54.215 "read": true, 00:12:54.215 "write": true, 00:12:54.215 "unmap": true, 00:12:54.215 "write_zeroes": true, 00:12:54.215 "flush": true, 00:12:54.215 "reset": true, 00:12:54.215 "compare": false, 00:12:54.215 "compare_and_write": false, 00:12:54.215 "abort": true, 00:12:54.215 "nvme_admin": false, 00:12:54.215 "nvme_io": false 00:12:54.215 }, 00:12:54.215 "memory_domains": [ 00:12:54.215 { 00:12:54.215 "dma_device_id": "system", 00:12:54.215 "dma_device_type": 1 00:12:54.215 }, 00:12:54.215 { 00:12:54.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.215 "dma_device_type": 2 00:12:54.215 } 00:12:54.215 ], 00:12:54.215 "driver_specific": {} 00:12:54.215 } 00:12:54.215 ] 00:12:54.473 03:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:54.473 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:54.473 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:54.473 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:54.473 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:54.473 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:54.474 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:54.474 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:54.474 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:54.474 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:54.474 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:54.474 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:54.474 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:54.474 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.474 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.474 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:54.474 "name": "Existed_Raid", 00:12:54.474 "uuid": "0bc34022-a44a-4055-888d-7e84c9d7ac92", 00:12:54.474 "strip_size_kb": 64, 00:12:54.474 "state": "online", 
00:12:54.474 "raid_level": "concat", 00:12:54.474 "superblock": false, 00:12:54.474 "num_base_bdevs": 3, 00:12:54.474 "num_base_bdevs_discovered": 3, 00:12:54.474 "num_base_bdevs_operational": 3, 00:12:54.474 "base_bdevs_list": [ 00:12:54.474 { 00:12:54.474 "name": "BaseBdev1", 00:12:54.474 "uuid": "22180557-444c-47ac-a967-ff6faad515e9", 00:12:54.474 "is_configured": true, 00:12:54.474 "data_offset": 0, 00:12:54.474 "data_size": 65536 00:12:54.474 }, 00:12:54.474 { 00:12:54.474 "name": "BaseBdev2", 00:12:54.474 "uuid": "b00797ff-bea8-45b4-ac32-7b0182f2fdd7", 00:12:54.474 "is_configured": true, 00:12:54.474 "data_offset": 0, 00:12:54.474 "data_size": 65536 00:12:54.474 }, 00:12:54.474 { 00:12:54.474 "name": "BaseBdev3", 00:12:54.474 "uuid": "0d7558d1-e0df-4f21-a7d4-060c1f6e9d3b", 00:12:54.474 "is_configured": true, 00:12:54.474 "data_offset": 0, 00:12:54.474 "data_size": 65536 00:12:54.474 } 00:12:54.474 ] 00:12:54.474 }' 00:12:54.732 03:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:54.732 03:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.299 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:12:55.299 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:12:55.299 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:55.299 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:55.299 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:55.299 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:12:55.299 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:55.299 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:55.557 [2024-05-15 03:07:26.486170] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.557 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:55.557 "name": "Existed_Raid", 00:12:55.557 "aliases": [ 00:12:55.557 "0bc34022-a44a-4055-888d-7e84c9d7ac92" 00:12:55.557 ], 00:12:55.557 "product_name": "Raid Volume", 00:12:55.557 "block_size": 512, 00:12:55.557 "num_blocks": 196608, 00:12:55.557 "uuid": "0bc34022-a44a-4055-888d-7e84c9d7ac92", 00:12:55.557 "assigned_rate_limits": { 00:12:55.557 "rw_ios_per_sec": 0, 00:12:55.557 "rw_mbytes_per_sec": 0, 00:12:55.557 "r_mbytes_per_sec": 0, 00:12:55.557 "w_mbytes_per_sec": 0 00:12:55.557 }, 00:12:55.557 "claimed": false, 00:12:55.557 "zoned": false, 00:12:55.557 "supported_io_types": { 00:12:55.557 "read": true, 00:12:55.557 "write": true, 00:12:55.557 "unmap": true, 00:12:55.557 "write_zeroes": true, 00:12:55.557 "flush": true, 00:12:55.557 "reset": true, 00:12:55.557 "compare": false, 00:12:55.557 "compare_and_write": false, 00:12:55.557 "abort": false, 00:12:55.557 "nvme_admin": false, 00:12:55.557 "nvme_io": false 00:12:55.557 }, 00:12:55.557 "memory_domains": [ 00:12:55.557 { 00:12:55.557 "dma_device_id": "system", 00:12:55.557 "dma_device_type": 1 00:12:55.557 }, 00:12:55.557 { 00:12:55.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.557 "dma_device_type": 2 00:12:55.557 }, 
00:12:55.557 { 00:12:55.557 "dma_device_id": "system", 00:12:55.557 "dma_device_type": 1 00:12:55.557 }, 00:12:55.557 { 00:12:55.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.557 "dma_device_type": 2 00:12:55.557 }, 00:12:55.557 { 00:12:55.557 "dma_device_id": "system", 00:12:55.557 "dma_device_type": 1 00:12:55.557 }, 00:12:55.557 { 00:12:55.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.557 "dma_device_type": 2 00:12:55.557 } 00:12:55.557 ], 00:12:55.557 "driver_specific": { 00:12:55.557 "raid": { 00:12:55.557 "uuid": "0bc34022-a44a-4055-888d-7e84c9d7ac92", 00:12:55.557 "strip_size_kb": 64, 00:12:55.557 "state": "online", 00:12:55.557 "raid_level": "concat", 00:12:55.557 "superblock": false, 00:12:55.557 "num_base_bdevs": 3, 00:12:55.557 "num_base_bdevs_discovered": 3, 00:12:55.557 "num_base_bdevs_operational": 3, 00:12:55.557 "base_bdevs_list": [ 00:12:55.557 { 00:12:55.557 "name": "BaseBdev1", 00:12:55.557 "uuid": "22180557-444c-47ac-a967-ff6faad515e9", 00:12:55.557 "is_configured": true, 00:12:55.557 "data_offset": 0, 00:12:55.557 "data_size": 65536 00:12:55.557 }, 00:12:55.557 { 00:12:55.557 "name": "BaseBdev2", 00:12:55.557 "uuid": "b00797ff-bea8-45b4-ac32-7b0182f2fdd7", 00:12:55.557 "is_configured": true, 00:12:55.557 "data_offset": 0, 00:12:55.557 "data_size": 65536 00:12:55.557 }, 00:12:55.557 { 00:12:55.557 "name": "BaseBdev3", 00:12:55.557 "uuid": "0d7558d1-e0df-4f21-a7d4-060c1f6e9d3b", 00:12:55.557 "is_configured": true, 00:12:55.557 "data_offset": 0, 00:12:55.557 "data_size": 65536 00:12:55.557 } 00:12:55.557 ] 00:12:55.557 } 00:12:55.557 } 00:12:55.557 }' 00:12:55.557 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.557 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:12:55.557 BaseBdev2 00:12:55.557 BaseBdev3' 00:12:55.557 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:55.557 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:55.557 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:55.816 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:55.816 "name": "BaseBdev1", 00:12:55.816 "aliases": [ 00:12:55.816 "22180557-444c-47ac-a967-ff6faad515e9" 00:12:55.816 ], 00:12:55.816 "product_name": "Malloc disk", 00:12:55.816 "block_size": 512, 00:12:55.816 "num_blocks": 65536, 00:12:55.816 "uuid": "22180557-444c-47ac-a967-ff6faad515e9", 00:12:55.816 "assigned_rate_limits": { 00:12:55.816 "rw_ios_per_sec": 0, 00:12:55.816 "rw_mbytes_per_sec": 0, 00:12:55.816 "r_mbytes_per_sec": 0, 00:12:55.816 "w_mbytes_per_sec": 0 00:12:55.816 }, 00:12:55.816 "claimed": true, 00:12:55.816 "claim_type": "exclusive_write", 00:12:55.816 "zoned": false, 00:12:55.816 "supported_io_types": { 00:12:55.816 "read": true, 00:12:55.816 "write": true, 00:12:55.816 "unmap": true, 00:12:55.816 "write_zeroes": true, 00:12:55.816 "flush": true, 00:12:55.816 "reset": true, 00:12:55.816 "compare": false, 00:12:55.816 "compare_and_write": false, 00:12:55.816 "abort": true, 00:12:55.816 "nvme_admin": false, 00:12:55.816 "nvme_io": false 00:12:55.816 }, 00:12:55.816 "memory_domains": [ 00:12:55.816 { 00:12:55.816 "dma_device_id": "system", 
00:12:55.816 "dma_device_type": 1 00:12:55.816 }, 00:12:55.816 { 00:12:55.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.816 "dma_device_type": 2 00:12:55.816 } 00:12:55.816 ], 00:12:55.816 "driver_specific": {} 00:12:55.816 }' 00:12:55.816 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:55.816 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:55.816 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:55.816 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:55.816 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:56.075 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:56.075 03:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:56.075 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:56.075 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:56.075 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:56.075 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:56.075 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:56.075 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:56.075 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:56.075 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:56.331 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:56.331 "name": "BaseBdev2", 00:12:56.331 "aliases": [ 00:12:56.331 "b00797ff-bea8-45b4-ac32-7b0182f2fdd7" 00:12:56.331 ], 00:12:56.331 "product_name": "Malloc disk", 00:12:56.331 "block_size": 512, 00:12:56.331 "num_blocks": 65536, 00:12:56.331 "uuid": "b00797ff-bea8-45b4-ac32-7b0182f2fdd7", 00:12:56.331 "assigned_rate_limits": { 00:12:56.331 "rw_ios_per_sec": 0, 00:12:56.331 "rw_mbytes_per_sec": 0, 00:12:56.331 "r_mbytes_per_sec": 0, 00:12:56.331 "w_mbytes_per_sec": 0 00:12:56.331 }, 00:12:56.331 "claimed": true, 00:12:56.331 "claim_type": "exclusive_write", 00:12:56.331 "zoned": false, 00:12:56.331 "supported_io_types": { 00:12:56.331 "read": true, 00:12:56.331 "write": true, 00:12:56.331 "unmap": true, 00:12:56.331 "write_zeroes": true, 00:12:56.331 "flush": true, 00:12:56.331 "reset": true, 00:12:56.331 "compare": false, 00:12:56.331 "compare_and_write": false, 00:12:56.331 "abort": true, 00:12:56.331 "nvme_admin": false, 00:12:56.331 "nvme_io": false 00:12:56.331 }, 00:12:56.331 "memory_domains": [ 00:12:56.331 { 00:12:56.331 "dma_device_id": "system", 00:12:56.331 "dma_device_type": 1 00:12:56.331 }, 00:12:56.331 { 00:12:56.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.331 "dma_device_type": 2 00:12:56.331 } 00:12:56.331 ], 00:12:56.331 "driver_specific": {} 00:12:56.331 }' 00:12:56.331 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:56.588 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:56.588 03:07:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:56.588 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:56.588 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:56.588 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:56.588 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:56.588 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:56.588 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:56.588 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:56.846 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:56.846 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:56.846 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:56.846 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:56.846 03:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:57.103 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:57.103 "name": "BaseBdev3", 00:12:57.103 "aliases": [ 00:12:57.103 "0d7558d1-e0df-4f21-a7d4-060c1f6e9d3b" 00:12:57.103 ], 00:12:57.103 "product_name": "Malloc disk", 00:12:57.103 "block_size": 512, 00:12:57.103 "num_blocks": 65536, 00:12:57.103 "uuid": "0d7558d1-e0df-4f21-a7d4-060c1f6e9d3b", 00:12:57.103 "assigned_rate_limits": { 00:12:57.103 "rw_ios_per_sec": 0, 00:12:57.103 "rw_mbytes_per_sec": 0, 00:12:57.103 "r_mbytes_per_sec": 0, 00:12:57.103 "w_mbytes_per_sec": 0 00:12:57.103 }, 00:12:57.103 "claimed": true, 00:12:57.103 "claim_type": "exclusive_write", 00:12:57.103 "zoned": false, 00:12:57.103 "supported_io_types": { 00:12:57.103 "read": true, 00:12:57.103 "write": true, 00:12:57.103 "unmap": true, 00:12:57.103 "write_zeroes": true, 00:12:57.103 "flush": true, 00:12:57.103 "reset": true, 00:12:57.103 "compare": false, 00:12:57.103 "compare_and_write": false, 00:12:57.103 "abort": true, 00:12:57.103 "nvme_admin": false, 00:12:57.103 "nvme_io": false 00:12:57.103 }, 00:12:57.103 "memory_domains": [ 00:12:57.103 { 00:12:57.103 "dma_device_id": "system", 00:12:57.103 "dma_device_type": 1 00:12:57.103 }, 00:12:57.103 { 00:12:57.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.103 "dma_device_type": 2 00:12:57.103 } 00:12:57.103 ], 00:12:57.103 "driver_specific": {} 00:12:57.103 }' 00:12:57.103 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:57.103 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:57.103 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:57.103 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:57.103 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:57.104 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:57.104 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.md_interleave 00:12:57.361 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:57.362 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:57.362 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:57.362 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:57.362 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:57.362 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:57.619 [2024-05-15 03:07:28.671944] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.619 [2024-05-15 03:07:28.671970] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.619 [2024-05-15 03:07:28.672009] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.619 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.877 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:57.877 "name": "Existed_Raid", 00:12:57.877 "uuid": "0bc34022-a44a-4055-888d-7e84c9d7ac92", 00:12:57.877 "strip_size_kb": 64, 00:12:57.877 "state": "offline", 00:12:57.877 "raid_level": "concat", 00:12:57.877 "superblock": false, 00:12:57.877 "num_base_bdevs": 3, 00:12:57.877 "num_base_bdevs_discovered": 2, 00:12:57.877 
"num_base_bdevs_operational": 2, 00:12:57.877 "base_bdevs_list": [ 00:12:57.877 { 00:12:57.877 "name": null, 00:12:57.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.877 "is_configured": false, 00:12:57.877 "data_offset": 0, 00:12:57.877 "data_size": 65536 00:12:57.877 }, 00:12:57.877 { 00:12:57.877 "name": "BaseBdev2", 00:12:57.877 "uuid": "b00797ff-bea8-45b4-ac32-7b0182f2fdd7", 00:12:57.877 "is_configured": true, 00:12:57.877 "data_offset": 0, 00:12:57.877 "data_size": 65536 00:12:57.877 }, 00:12:57.877 { 00:12:57.877 "name": "BaseBdev3", 00:12:57.877 "uuid": "0d7558d1-e0df-4f21-a7d4-060c1f6e9d3b", 00:12:57.877 "is_configured": true, 00:12:57.877 "data_offset": 0, 00:12:57.877 "data_size": 65536 00:12:57.877 } 00:12:57.877 ] 00:12:57.877 }' 00:12:57.877 03:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:57.877 03:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.442 03:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:58.442 03:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:58.442 03:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.442 03:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:58.700 03:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:58.700 03:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.700 03:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:58.959 [2024-05-15 03:07:29.900443] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:58.959 03:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:58.959 03:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:58.959 03:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.959 03:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:59.217 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:59.217 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:59.217 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:59.475 [2024-05-15 03:07:30.424347] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:59.475 [2024-05-15 03:07:30.424388] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c87760 name Existed_Raid, state offline 00:12:59.475 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.475 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.475 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.476 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:12:59.734 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:12:59.734 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:12:59.734 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:12:59.734 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:12:59.734 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:12:59.734 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:59.734 BaseBdev2 00:12:59.993 03:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:12:59.993 03:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:59.993 03:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:59.993 03:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:59.993 03:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:59.993 03:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:59.993 03:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:59.993 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:00.282 [ 00:13:00.282 { 00:13:00.282 "name": "BaseBdev2", 00:13:00.282 "aliases": [ 00:13:00.282 "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0" 00:13:00.282 ], 00:13:00.282 "product_name": "Malloc disk", 00:13:00.282 "block_size": 512, 00:13:00.282 "num_blocks": 65536, 00:13:00.282 "uuid": "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0", 00:13:00.282 "assigned_rate_limits": { 00:13:00.282 "rw_ios_per_sec": 0, 00:13:00.282 "rw_mbytes_per_sec": 0, 00:13:00.282 "r_mbytes_per_sec": 0, 00:13:00.282 "w_mbytes_per_sec": 0 00:13:00.282 }, 00:13:00.282 "claimed": false, 00:13:00.282 "zoned": false, 00:13:00.282 "supported_io_types": { 00:13:00.282 "read": true, 00:13:00.282 "write": true, 00:13:00.282 "unmap": true, 00:13:00.282 "write_zeroes": true, 00:13:00.282 "flush": true, 00:13:00.282 "reset": true, 00:13:00.282 "compare": false, 00:13:00.282 "compare_and_write": false, 00:13:00.282 "abort": true, 00:13:00.282 "nvme_admin": false, 00:13:00.282 "nvme_io": false 00:13:00.282 }, 00:13:00.282 "memory_domains": [ 00:13:00.282 { 00:13:00.282 "dma_device_id": "system", 00:13:00.282 "dma_device_type": 1 00:13:00.282 }, 00:13:00.282 { 00:13:00.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.282 "dma_device_type": 2 00:13:00.282 } 00:13:00.282 ], 00:13:00.282 "driver_specific": {} 00:13:00.282 } 00:13:00.282 ] 00:13:00.282 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:00.282 03:07:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:13:00.282 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:13:00.282 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:00.541 BaseBdev3 00:13:00.541 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:13:00.541 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:00.541 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:00.541 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:00.541 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:00.541 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:00.541 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:00.799 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:01.057 [ 00:13:01.057 { 00:13:01.057 "name": "BaseBdev3", 00:13:01.057 "aliases": [ 00:13:01.057 "aa6d281d-cd15-466b-850f-9c7a2b8fc13d" 00:13:01.057 ], 00:13:01.057 "product_name": "Malloc disk", 00:13:01.057 "block_size": 512, 00:13:01.057 "num_blocks": 65536, 00:13:01.057 "uuid": "aa6d281d-cd15-466b-850f-9c7a2b8fc13d", 00:13:01.057 "assigned_rate_limits": { 00:13:01.057 "rw_ios_per_sec": 0, 00:13:01.057 "rw_mbytes_per_sec": 0, 00:13:01.057 "r_mbytes_per_sec": 0, 00:13:01.057 "w_mbytes_per_sec": 0 00:13:01.057 }, 00:13:01.057 "claimed": false, 00:13:01.057 "zoned": false, 00:13:01.057 "supported_io_types": { 00:13:01.057 "read": true, 00:13:01.057 "write": true, 00:13:01.057 "unmap": true, 00:13:01.057 "write_zeroes": true, 00:13:01.057 "flush": true, 00:13:01.057 "reset": true, 00:13:01.057 "compare": false, 00:13:01.057 "compare_and_write": false, 00:13:01.057 "abort": true, 00:13:01.057 "nvme_admin": false, 00:13:01.057 "nvme_io": false 00:13:01.057 }, 00:13:01.057 "memory_domains": [ 00:13:01.057 { 00:13:01.057 "dma_device_id": "system", 00:13:01.057 "dma_device_type": 1 00:13:01.057 }, 00:13:01.057 { 00:13:01.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.057 "dma_device_type": 2 00:13:01.057 } 00:13:01.057 ], 00:13:01.057 "driver_specific": {} 00:13:01.057 } 00:13:01.057 ] 00:13:01.057 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:01.057 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:13:01.057 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:13:01.057 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:01.315 [2024-05-15 03:07:32.384707] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:01.315 
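The records above and below show bdev_raid_create being invoked while BaseBdev1 does not exist yet: the RAID bdev is accepted but parked in the "configuring" state with only two of its three base bdevs discovered. A minimal sketch of reproducing this step by hand, assuming an SPDK target is already listening on the same RPC socket (the rpc.py path, socket path, and every subcommand below are taken verbatim from this trace):

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Create only two of the three base bdevs; BaseBdev1 stays missing.
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev3

  # Declare the raid anyway. With one base bdev absent it cannot go
  # online, so it is held in state "configuring".
  $rpc -s $sock bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # Expect state "configuring", num_base_bdevs_discovered 2,
  # num_base_bdevs_operational 3.
  $rpc -s $sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'

Later in the trace, once the missing base bdev is registered again and claimed, the same query reports the raid as "online".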
[2024-05-15 03:07:32.384745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:01.315 [2024-05-15 03:07:32.384762] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.315 [2024-05-15 03:07:32.386167] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.315 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:01.315 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:01.315 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:01.315 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:01.315 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:01.315 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:01.315 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:01.315 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:01.315 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:01.315 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:01.315 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.315 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.573 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:01.573 "name": "Existed_Raid", 00:13:01.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.573 "strip_size_kb": 64, 00:13:01.573 "state": "configuring", 00:13:01.573 "raid_level": "concat", 00:13:01.573 "superblock": false, 00:13:01.573 "num_base_bdevs": 3, 00:13:01.573 "num_base_bdevs_discovered": 2, 00:13:01.573 "num_base_bdevs_operational": 3, 00:13:01.573 "base_bdevs_list": [ 00:13:01.573 { 00:13:01.573 "name": "BaseBdev1", 00:13:01.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.573 "is_configured": false, 00:13:01.573 "data_offset": 0, 00:13:01.573 "data_size": 0 00:13:01.573 }, 00:13:01.573 { 00:13:01.573 "name": "BaseBdev2", 00:13:01.573 "uuid": "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0", 00:13:01.573 "is_configured": true, 00:13:01.573 "data_offset": 0, 00:13:01.573 "data_size": 65536 00:13:01.573 }, 00:13:01.573 { 00:13:01.573 "name": "BaseBdev3", 00:13:01.573 "uuid": "aa6d281d-cd15-466b-850f-9c7a2b8fc13d", 00:13:01.573 "is_configured": true, 00:13:01.573 "data_offset": 0, 00:13:01.573 "data_size": 65536 00:13:01.573 } 00:13:01.573 ] 00:13:01.573 }' 00:13:01.573 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:01.573 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.139 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:02.397 [2024-05-15 03:07:33.515714] 
bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:02.397 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:02.397 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:02.397 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:02.397 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:02.397 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:02.397 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:02.397 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:02.397 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:02.397 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:02.397 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:02.397 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.397 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.654 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:02.654 "name": "Existed_Raid", 00:13:02.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.654 "strip_size_kb": 64, 00:13:02.654 "state": "configuring", 00:13:02.654 "raid_level": "concat", 00:13:02.654 "superblock": false, 00:13:02.654 "num_base_bdevs": 3, 00:13:02.654 "num_base_bdevs_discovered": 1, 00:13:02.654 "num_base_bdevs_operational": 3, 00:13:02.654 "base_bdevs_list": [ 00:13:02.654 { 00:13:02.654 "name": "BaseBdev1", 00:13:02.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.654 "is_configured": false, 00:13:02.654 "data_offset": 0, 00:13:02.654 "data_size": 0 00:13:02.654 }, 00:13:02.654 { 00:13:02.654 "name": null, 00:13:02.654 "uuid": "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0", 00:13:02.654 "is_configured": false, 00:13:02.654 "data_offset": 0, 00:13:02.654 "data_size": 65536 00:13:02.654 }, 00:13:02.654 { 00:13:02.654 "name": "BaseBdev3", 00:13:02.654 "uuid": "aa6d281d-cd15-466b-850f-9c7a2b8fc13d", 00:13:02.654 "is_configured": true, 00:13:02.654 "data_offset": 0, 00:13:02.654 "data_size": 65536 00:13:02.654 } 00:13:02.654 ] 00:13:02.654 }' 00:13:02.655 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:02.655 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.588 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.588 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:03.588 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:13:03.588 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:03.847 [2024-05-15 03:07:34.926807] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.847 BaseBdev1 00:13:03.847 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:13:03.847 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:03.847 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:03.847 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:03.847 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:03.847 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:03.847 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:04.112 03:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:04.369 [ 00:13:04.369 { 00:13:04.369 "name": "BaseBdev1", 00:13:04.369 "aliases": [ 00:13:04.369 "23ca65ee-9deb-4a27-8f39-096a365acb7f" 00:13:04.369 ], 00:13:04.369 "product_name": "Malloc disk", 00:13:04.369 "block_size": 512, 00:13:04.369 "num_blocks": 65536, 00:13:04.369 "uuid": "23ca65ee-9deb-4a27-8f39-096a365acb7f", 00:13:04.369 "assigned_rate_limits": { 00:13:04.369 "rw_ios_per_sec": 0, 00:13:04.369 "rw_mbytes_per_sec": 0, 00:13:04.369 "r_mbytes_per_sec": 0, 00:13:04.369 "w_mbytes_per_sec": 0 00:13:04.369 }, 00:13:04.369 "claimed": true, 00:13:04.369 "claim_type": "exclusive_write", 00:13:04.369 "zoned": false, 00:13:04.369 "supported_io_types": { 00:13:04.369 "read": true, 00:13:04.369 "write": true, 00:13:04.369 "unmap": true, 00:13:04.369 "write_zeroes": true, 00:13:04.369 "flush": true, 00:13:04.369 "reset": true, 00:13:04.369 "compare": false, 00:13:04.369 "compare_and_write": false, 00:13:04.369 "abort": true, 00:13:04.369 "nvme_admin": false, 00:13:04.369 "nvme_io": false 00:13:04.369 }, 00:13:04.369 "memory_domains": [ 00:13:04.369 { 00:13:04.369 "dma_device_id": "system", 00:13:04.369 "dma_device_type": 1 00:13:04.369 }, 00:13:04.369 { 00:13:04.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.369 "dma_device_type": 2 00:13:04.369 } 00:13:04.369 ], 00:13:04.369 "driver_specific": {} 00:13:04.369 } 00:13:04.369 ] 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.369 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.627 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:04.627 "name": "Existed_Raid", 00:13:04.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.627 "strip_size_kb": 64, 00:13:04.627 "state": "configuring", 00:13:04.627 "raid_level": "concat", 00:13:04.627 "superblock": false, 00:13:04.627 "num_base_bdevs": 3, 00:13:04.627 "num_base_bdevs_discovered": 2, 00:13:04.627 "num_base_bdevs_operational": 3, 00:13:04.627 "base_bdevs_list": [ 00:13:04.627 { 00:13:04.627 "name": "BaseBdev1", 00:13:04.627 "uuid": "23ca65ee-9deb-4a27-8f39-096a365acb7f", 00:13:04.627 "is_configured": true, 00:13:04.627 "data_offset": 0, 00:13:04.627 "data_size": 65536 00:13:04.627 }, 00:13:04.627 { 00:13:04.627 "name": null, 00:13:04.627 "uuid": "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0", 00:13:04.627 "is_configured": false, 00:13:04.627 "data_offset": 0, 00:13:04.627 "data_size": 65536 00:13:04.627 }, 00:13:04.627 { 00:13:04.627 "name": "BaseBdev3", 00:13:04.627 "uuid": "aa6d281d-cd15-466b-850f-9c7a2b8fc13d", 00:13:04.627 "is_configured": true, 00:13:04.627 "data_offset": 0, 00:13:04.627 "data_size": 65536 00:13:04.627 } 00:13:04.627 ] 00:13:04.627 }' 00:13:04.627 03:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:04.627 03:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.193 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.193 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:05.451 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:05.451 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:05.709 [2024-05-15 03:07:36.807901] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:05.709 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:05.709 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:05.709 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:05.709 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:05.709 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:13:05.709 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:05.709 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:05.709 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:05.709 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:05.709 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:05.709 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.709 03:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.968 03:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:05.968 "name": "Existed_Raid", 00:13:05.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.968 "strip_size_kb": 64, 00:13:05.968 "state": "configuring", 00:13:05.968 "raid_level": "concat", 00:13:05.968 "superblock": false, 00:13:05.968 "num_base_bdevs": 3, 00:13:05.968 "num_base_bdevs_discovered": 1, 00:13:05.968 "num_base_bdevs_operational": 3, 00:13:05.968 "base_bdevs_list": [ 00:13:05.968 { 00:13:05.968 "name": "BaseBdev1", 00:13:05.968 "uuid": "23ca65ee-9deb-4a27-8f39-096a365acb7f", 00:13:05.968 "is_configured": true, 00:13:05.968 "data_offset": 0, 00:13:05.968 "data_size": 65536 00:13:05.968 }, 00:13:05.968 { 00:13:05.968 "name": null, 00:13:05.968 "uuid": "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0", 00:13:05.968 "is_configured": false, 00:13:05.968 "data_offset": 0, 00:13:05.968 "data_size": 65536 00:13:05.968 }, 00:13:05.968 { 00:13:05.968 "name": null, 00:13:05.968 "uuid": "aa6d281d-cd15-466b-850f-9c7a2b8fc13d", 00:13:05.968 "is_configured": false, 00:13:05.968 "data_offset": 0, 00:13:05.968 "data_size": 65536 00:13:05.968 } 00:13:05.968 ] 00:13:05.968 }' 00:13:05.968 03:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:05.968 03:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.534 03:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.535 03:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:06.793 03:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:13:06.793 03:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:07.051 [2024-05-15 03:07:38.171551] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.051 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:07.051 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:07.051 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:07.051 03:07:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:07.051 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:07.051 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:07.051 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:07.051 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:07.051 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:07.051 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:07.051 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.051 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.309 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:07.309 "name": "Existed_Raid", 00:13:07.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.309 "strip_size_kb": 64, 00:13:07.309 "state": "configuring", 00:13:07.309 "raid_level": "concat", 00:13:07.309 "superblock": false, 00:13:07.309 "num_base_bdevs": 3, 00:13:07.309 "num_base_bdevs_discovered": 2, 00:13:07.309 "num_base_bdevs_operational": 3, 00:13:07.309 "base_bdevs_list": [ 00:13:07.309 { 00:13:07.309 "name": "BaseBdev1", 00:13:07.309 "uuid": "23ca65ee-9deb-4a27-8f39-096a365acb7f", 00:13:07.309 "is_configured": true, 00:13:07.309 "data_offset": 0, 00:13:07.309 "data_size": 65536 00:13:07.309 }, 00:13:07.309 { 00:13:07.309 "name": null, 00:13:07.309 "uuid": "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0", 00:13:07.309 "is_configured": false, 00:13:07.309 "data_offset": 0, 00:13:07.309 "data_size": 65536 00:13:07.309 }, 00:13:07.309 { 00:13:07.309 "name": "BaseBdev3", 00:13:07.309 "uuid": "aa6d281d-cd15-466b-850f-9c7a2b8fc13d", 00:13:07.309 "is_configured": true, 00:13:07.309 "data_offset": 0, 00:13:07.309 "data_size": 65536 00:13:07.309 } 00:13:07.309 ] 00:13:07.309 }' 00:13:07.309 03:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:07.309 03:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.243 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.244 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:08.244 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:13:08.244 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:08.502 [2024-05-15 03:07:39.559274] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:08.502 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:08.502 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:08.502 03:07:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:08.502 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:08.502 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:08.502 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:08.502 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:08.502 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:08.502 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:08.502 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:08.502 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.502 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.760 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:08.760 "name": "Existed_Raid", 00:13:08.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.761 "strip_size_kb": 64, 00:13:08.761 "state": "configuring", 00:13:08.761 "raid_level": "concat", 00:13:08.761 "superblock": false, 00:13:08.761 "num_base_bdevs": 3, 00:13:08.761 "num_base_bdevs_discovered": 1, 00:13:08.761 "num_base_bdevs_operational": 3, 00:13:08.761 "base_bdevs_list": [ 00:13:08.761 { 00:13:08.761 "name": null, 00:13:08.761 "uuid": "23ca65ee-9deb-4a27-8f39-096a365acb7f", 00:13:08.761 "is_configured": false, 00:13:08.761 "data_offset": 0, 00:13:08.761 "data_size": 65536 00:13:08.761 }, 00:13:08.761 { 00:13:08.761 "name": null, 00:13:08.761 "uuid": "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0", 00:13:08.761 "is_configured": false, 00:13:08.761 "data_offset": 0, 00:13:08.761 "data_size": 65536 00:13:08.761 }, 00:13:08.761 { 00:13:08.761 "name": "BaseBdev3", 00:13:08.761 "uuid": "aa6d281d-cd15-466b-850f-9c7a2b8fc13d", 00:13:08.761 "is_configured": true, 00:13:08.761 "data_offset": 0, 00:13:08.761 "data_size": 65536 00:13:08.761 } 00:13:08.761 ] 00:13:08.761 }' 00:13:08.761 03:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:08.761 03:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.325 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.325 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:09.583 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:13:09.583 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:09.841 [2024-05-15 03:07:40.961408] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.841 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:09.841 03:07:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:09.841 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:09.841 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:09.841 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:09.841 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:09.841 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:09.841 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:09.841 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:09.841 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:09.841 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.841 03:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.099 03:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:10.099 "name": "Existed_Raid", 00:13:10.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.099 "strip_size_kb": 64, 00:13:10.099 "state": "configuring", 00:13:10.099 "raid_level": "concat", 00:13:10.099 "superblock": false, 00:13:10.099 "num_base_bdevs": 3, 00:13:10.099 "num_base_bdevs_discovered": 2, 00:13:10.099 "num_base_bdevs_operational": 3, 00:13:10.099 "base_bdevs_list": [ 00:13:10.099 { 00:13:10.099 "name": null, 00:13:10.099 "uuid": "23ca65ee-9deb-4a27-8f39-096a365acb7f", 00:13:10.099 "is_configured": false, 00:13:10.099 "data_offset": 0, 00:13:10.099 "data_size": 65536 00:13:10.099 }, 00:13:10.099 { 00:13:10.099 "name": "BaseBdev2", 00:13:10.099 "uuid": "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0", 00:13:10.099 "is_configured": true, 00:13:10.099 "data_offset": 0, 00:13:10.099 "data_size": 65536 00:13:10.099 }, 00:13:10.099 { 00:13:10.099 "name": "BaseBdev3", 00:13:10.099 "uuid": "aa6d281d-cd15-466b-850f-9c7a2b8fc13d", 00:13:10.099 "is_configured": true, 00:13:10.099 "data_offset": 0, 00:13:10.099 "data_size": 65536 00:13:10.099 } 00:13:10.099 ] 00:13:10.099 }' 00:13:10.099 03:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:10.099 03:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.033 03:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.033 03:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:11.033 03:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:13:11.033 03:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.033 03:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:11.291 03:07:42 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 23ca65ee-9deb-4a27-8f39-096a365acb7f 00:13:11.549 [2024-05-15 03:07:42.597103] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:11.549 [2024-05-15 03:07:42.597137] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e2b750 00:13:11.549 [2024-05-15 03:07:42.597144] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:11.549 [2024-05-15 03:07:42.597341] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e2af80 00:13:11.549 [2024-05-15 03:07:42.597464] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e2b750 00:13:11.549 [2024-05-15 03:07:42.597472] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1e2b750 00:13:11.549 [2024-05-15 03:07:42.597630] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.549 NewBaseBdev 00:13:11.549 03:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:13:11.549 03:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:13:11.549 03:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:11.549 03:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:11.549 03:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:11.549 03:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:11.549 03:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:11.807 03:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:12.065 [ 00:13:12.065 { 00:13:12.065 "name": "NewBaseBdev", 00:13:12.065 "aliases": [ 00:13:12.065 "23ca65ee-9deb-4a27-8f39-096a365acb7f" 00:13:12.065 ], 00:13:12.065 "product_name": "Malloc disk", 00:13:12.065 "block_size": 512, 00:13:12.065 "num_blocks": 65536, 00:13:12.065 "uuid": "23ca65ee-9deb-4a27-8f39-096a365acb7f", 00:13:12.065 "assigned_rate_limits": { 00:13:12.065 "rw_ios_per_sec": 0, 00:13:12.065 "rw_mbytes_per_sec": 0, 00:13:12.065 "r_mbytes_per_sec": 0, 00:13:12.065 "w_mbytes_per_sec": 0 00:13:12.065 }, 00:13:12.065 "claimed": true, 00:13:12.065 "claim_type": "exclusive_write", 00:13:12.065 "zoned": false, 00:13:12.065 "supported_io_types": { 00:13:12.065 "read": true, 00:13:12.065 "write": true, 00:13:12.065 "unmap": true, 00:13:12.065 "write_zeroes": true, 00:13:12.065 "flush": true, 00:13:12.065 "reset": true, 00:13:12.065 "compare": false, 00:13:12.065 "compare_and_write": false, 00:13:12.065 "abort": true, 00:13:12.065 "nvme_admin": false, 00:13:12.065 "nvme_io": false 00:13:12.065 }, 00:13:12.065 "memory_domains": [ 00:13:12.065 { 00:13:12.065 "dma_device_id": "system", 00:13:12.065 "dma_device_type": 1 00:13:12.065 }, 00:13:12.065 { 00:13:12.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.065 "dma_device_type": 2 00:13:12.065 } 00:13:12.065 ], 00:13:12.065 "driver_specific": {} 
00:13:12.065 } 00:13:12.065 ] 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.065 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.323 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:12.323 "name": "Existed_Raid", 00:13:12.323 "uuid": "7dceaea8-93ba-48c2-aae9-7c24d72936eb", 00:13:12.323 "strip_size_kb": 64, 00:13:12.323 "state": "online", 00:13:12.323 "raid_level": "concat", 00:13:12.323 "superblock": false, 00:13:12.323 "num_base_bdevs": 3, 00:13:12.323 "num_base_bdevs_discovered": 3, 00:13:12.323 "num_base_bdevs_operational": 3, 00:13:12.323 "base_bdevs_list": [ 00:13:12.323 { 00:13:12.323 "name": "NewBaseBdev", 00:13:12.323 "uuid": "23ca65ee-9deb-4a27-8f39-096a365acb7f", 00:13:12.323 "is_configured": true, 00:13:12.323 "data_offset": 0, 00:13:12.323 "data_size": 65536 00:13:12.323 }, 00:13:12.323 { 00:13:12.323 "name": "BaseBdev2", 00:13:12.323 "uuid": "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0", 00:13:12.323 "is_configured": true, 00:13:12.323 "data_offset": 0, 00:13:12.323 "data_size": 65536 00:13:12.323 }, 00:13:12.323 { 00:13:12.323 "name": "BaseBdev3", 00:13:12.323 "uuid": "aa6d281d-cd15-466b-850f-9c7a2b8fc13d", 00:13:12.323 "is_configured": true, 00:13:12.323 "data_offset": 0, 00:13:12.323 "data_size": 65536 00:13:12.323 } 00:13:12.323 ] 00:13:12.323 }' 00:13:12.323 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:12.323 03:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.889 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:13:12.889 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:12.889 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:12.889 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:12.889 03:07:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:12.889 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:12.889 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:12.889 03:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:13.147 [2024-05-15 03:07:44.225755] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.147 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:13.147 "name": "Existed_Raid", 00:13:13.147 "aliases": [ 00:13:13.147 "7dceaea8-93ba-48c2-aae9-7c24d72936eb" 00:13:13.147 ], 00:13:13.147 "product_name": "Raid Volume", 00:13:13.147 "block_size": 512, 00:13:13.147 "num_blocks": 196608, 00:13:13.147 "uuid": "7dceaea8-93ba-48c2-aae9-7c24d72936eb", 00:13:13.147 "assigned_rate_limits": { 00:13:13.147 "rw_ios_per_sec": 0, 00:13:13.147 "rw_mbytes_per_sec": 0, 00:13:13.147 "r_mbytes_per_sec": 0, 00:13:13.147 "w_mbytes_per_sec": 0 00:13:13.147 }, 00:13:13.147 "claimed": false, 00:13:13.147 "zoned": false, 00:13:13.147 "supported_io_types": { 00:13:13.147 "read": true, 00:13:13.147 "write": true, 00:13:13.147 "unmap": true, 00:13:13.147 "write_zeroes": true, 00:13:13.147 "flush": true, 00:13:13.147 "reset": true, 00:13:13.147 "compare": false, 00:13:13.147 "compare_and_write": false, 00:13:13.147 "abort": false, 00:13:13.147 "nvme_admin": false, 00:13:13.147 "nvme_io": false 00:13:13.147 }, 00:13:13.147 "memory_domains": [ 00:13:13.147 { 00:13:13.147 "dma_device_id": "system", 00:13:13.147 "dma_device_type": 1 00:13:13.147 }, 00:13:13.147 { 00:13:13.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.147 "dma_device_type": 2 00:13:13.147 }, 00:13:13.147 { 00:13:13.147 "dma_device_id": "system", 00:13:13.147 "dma_device_type": 1 00:13:13.147 }, 00:13:13.147 { 00:13:13.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.147 "dma_device_type": 2 00:13:13.147 }, 00:13:13.147 { 00:13:13.147 "dma_device_id": "system", 00:13:13.147 "dma_device_type": 1 00:13:13.147 }, 00:13:13.147 { 00:13:13.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.147 "dma_device_type": 2 00:13:13.147 } 00:13:13.147 ], 00:13:13.147 "driver_specific": { 00:13:13.147 "raid": { 00:13:13.147 "uuid": "7dceaea8-93ba-48c2-aae9-7c24d72936eb", 00:13:13.147 "strip_size_kb": 64, 00:13:13.147 "state": "online", 00:13:13.147 "raid_level": "concat", 00:13:13.147 "superblock": false, 00:13:13.147 "num_base_bdevs": 3, 00:13:13.147 "num_base_bdevs_discovered": 3, 00:13:13.147 "num_base_bdevs_operational": 3, 00:13:13.147 "base_bdevs_list": [ 00:13:13.147 { 00:13:13.147 "name": "NewBaseBdev", 00:13:13.147 "uuid": "23ca65ee-9deb-4a27-8f39-096a365acb7f", 00:13:13.147 "is_configured": true, 00:13:13.147 "data_offset": 0, 00:13:13.147 "data_size": 65536 00:13:13.147 }, 00:13:13.147 { 00:13:13.147 "name": "BaseBdev2", 00:13:13.147 "uuid": "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0", 00:13:13.147 "is_configured": true, 00:13:13.147 "data_offset": 0, 00:13:13.147 "data_size": 65536 00:13:13.147 }, 00:13:13.147 { 00:13:13.147 "name": "BaseBdev3", 00:13:13.147 "uuid": "aa6d281d-cd15-466b-850f-9c7a2b8fc13d", 00:13:13.147 "is_configured": true, 00:13:13.147 "data_offset": 0, 00:13:13.147 "data_size": 65536 00:13:13.147 } 00:13:13.147 ] 00:13:13.147 } 00:13:13.147 } 00:13:13.147 }' 00:13:13.147 03:07:44 
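For readers following the trace: the verify_raid_bdev_properties helper traced at bdev_raid.sh@195-@209 reduces to one bdev_get_bdevs dump plus a few jq probes. A minimal standalone sketch of that flow, built only from the RPCs visible in this trace; the $rpc/$sock shorthands are illustrative, not part of the script:

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Dump the raid volume, keep only the base bdevs already configured into it.
names=$("$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid \
    | jq -r '.[].driver_specific.raid.base_bdevs_list[]
             | select(.is_configured == true).name')

# Re-query each base bdev and spot-check a few fields, as @206-@209 do below.
for name in $names; do
    "$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[].block_size'
done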
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:13.147 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:13:13.147 BaseBdev2 00:13:13.147 BaseBdev3' 00:13:13.147 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:13.147 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:13.147 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:13.405 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:13.405 "name": "NewBaseBdev", 00:13:13.405 "aliases": [ 00:13:13.405 "23ca65ee-9deb-4a27-8f39-096a365acb7f" 00:13:13.405 ], 00:13:13.405 "product_name": "Malloc disk", 00:13:13.405 "block_size": 512, 00:13:13.405 "num_blocks": 65536, 00:13:13.405 "uuid": "23ca65ee-9deb-4a27-8f39-096a365acb7f", 00:13:13.405 "assigned_rate_limits": { 00:13:13.405 "rw_ios_per_sec": 0, 00:13:13.405 "rw_mbytes_per_sec": 0, 00:13:13.405 "r_mbytes_per_sec": 0, 00:13:13.405 "w_mbytes_per_sec": 0 00:13:13.405 }, 00:13:13.405 "claimed": true, 00:13:13.405 "claim_type": "exclusive_write", 00:13:13.405 "zoned": false, 00:13:13.405 "supported_io_types": { 00:13:13.405 "read": true, 00:13:13.405 "write": true, 00:13:13.405 "unmap": true, 00:13:13.405 "write_zeroes": true, 00:13:13.405 "flush": true, 00:13:13.405 "reset": true, 00:13:13.405 "compare": false, 00:13:13.405 "compare_and_write": false, 00:13:13.405 "abort": true, 00:13:13.405 "nvme_admin": false, 00:13:13.405 "nvme_io": false 00:13:13.405 }, 00:13:13.405 "memory_domains": [ 00:13:13.405 { 00:13:13.405 "dma_device_id": "system", 00:13:13.405 "dma_device_type": 1 00:13:13.405 }, 00:13:13.405 { 00:13:13.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.405 "dma_device_type": 2 00:13:13.405 } 00:13:13.405 ], 00:13:13.405 "driver_specific": {} 00:13:13.405 }' 00:13:13.405 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:13.663 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:13.663 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:13.663 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:13.663 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:13.663 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:13.663 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:13.663 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:13.921 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:13.921 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:13.921 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:13.921 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:13.921 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:13.921 03:07:44 
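The null comparisons above are expected: a plain malloc bdev carries no metadata, so md_size, md_interleave and dif_type are absent from its JSON and jq prints null for each. Condensed, the per-bdev assertions look roughly like this (a sketch reusing the $rpc/$sock shorthands from the previous note, not the literal script):

info=$("$rpc" -s "$sock" bdev_get_bdevs -b NewBaseBdev | jq '.[]')
[[ $(jq .block_size    <<< "$info") == 512  ]]   # malloc bdevs here use 512-byte blocks
[[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata area
[[ $(jq .md_interleave <<< "$info") == null ]]
[[ $(jq .dif_type      <<< "$info") == null ]]   # no DIF protection configured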
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:13.921 03:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:14.178 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:14.178 "name": "BaseBdev2", 00:13:14.178 "aliases": [ 00:13:14.178 "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0" 00:13:14.178 ], 00:13:14.178 "product_name": "Malloc disk", 00:13:14.178 "block_size": 512, 00:13:14.178 "num_blocks": 65536, 00:13:14.178 "uuid": "0adbca8f-fc76-4e3d-965f-fdd00bb0ffa0", 00:13:14.178 "assigned_rate_limits": { 00:13:14.178 "rw_ios_per_sec": 0, 00:13:14.178 "rw_mbytes_per_sec": 0, 00:13:14.178 "r_mbytes_per_sec": 0, 00:13:14.178 "w_mbytes_per_sec": 0 00:13:14.178 }, 00:13:14.178 "claimed": true, 00:13:14.178 "claim_type": "exclusive_write", 00:13:14.178 "zoned": false, 00:13:14.178 "supported_io_types": { 00:13:14.178 "read": true, 00:13:14.178 "write": true, 00:13:14.178 "unmap": true, 00:13:14.178 "write_zeroes": true, 00:13:14.178 "flush": true, 00:13:14.178 "reset": true, 00:13:14.178 "compare": false, 00:13:14.178 "compare_and_write": false, 00:13:14.178 "abort": true, 00:13:14.178 "nvme_admin": false, 00:13:14.178 "nvme_io": false 00:13:14.178 }, 00:13:14.178 "memory_domains": [ 00:13:14.178 { 00:13:14.178 "dma_device_id": "system", 00:13:14.178 "dma_device_type": 1 00:13:14.178 }, 00:13:14.178 { 00:13:14.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.178 "dma_device_type": 2 00:13:14.178 } 00:13:14.178 ], 00:13:14.178 "driver_specific": {} 00:13:14.178 }' 00:13:14.178 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:14.178 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:14.178 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:14.178 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:14.178 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:14.435 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:14.435 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:14.435 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:14.436 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:14.436 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:14.436 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:14.436 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:14.436 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:14.436 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:14.436 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:14.693 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:14.693 "name": "BaseBdev3", 00:13:14.693 "aliases": [ 00:13:14.693 
"aa6d281d-cd15-466b-850f-9c7a2b8fc13d" 00:13:14.693 ], 00:13:14.693 "product_name": "Malloc disk", 00:13:14.693 "block_size": 512, 00:13:14.693 "num_blocks": 65536, 00:13:14.693 "uuid": "aa6d281d-cd15-466b-850f-9c7a2b8fc13d", 00:13:14.693 "assigned_rate_limits": { 00:13:14.693 "rw_ios_per_sec": 0, 00:13:14.693 "rw_mbytes_per_sec": 0, 00:13:14.693 "r_mbytes_per_sec": 0, 00:13:14.693 "w_mbytes_per_sec": 0 00:13:14.693 }, 00:13:14.693 "claimed": true, 00:13:14.693 "claim_type": "exclusive_write", 00:13:14.693 "zoned": false, 00:13:14.693 "supported_io_types": { 00:13:14.693 "read": true, 00:13:14.693 "write": true, 00:13:14.693 "unmap": true, 00:13:14.693 "write_zeroes": true, 00:13:14.693 "flush": true, 00:13:14.693 "reset": true, 00:13:14.693 "compare": false, 00:13:14.693 "compare_and_write": false, 00:13:14.693 "abort": true, 00:13:14.693 "nvme_admin": false, 00:13:14.693 "nvme_io": false 00:13:14.693 }, 00:13:14.693 "memory_domains": [ 00:13:14.693 { 00:13:14.693 "dma_device_id": "system", 00:13:14.693 "dma_device_type": 1 00:13:14.693 }, 00:13:14.693 { 00:13:14.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.693 "dma_device_type": 2 00:13:14.693 } 00:13:14.693 ], 00:13:14.693 "driver_specific": {} 00:13:14.693 }' 00:13:14.693 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:14.693 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:14.951 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:14.951 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:14.951 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:14.951 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:14.951 03:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:14.951 03:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:14.951 03:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:14.951 03:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:15.207 03:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:15.207 03:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:15.207 03:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:15.464 [2024-05-15 03:07:46.391309] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:15.464 [2024-05-15 03:07:46.391333] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.464 [2024-05-15 03:07:46.391381] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.464 [2024-05-15 03:07:46.391432] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.464 [2024-05-15 03:07:46.391442] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e2b750 name Existed_Raid, state offline 00:13:15.464 03:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 4073715 00:13:15.464 03:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 
-- # '[' -z 4073715 ']' 00:13:15.464 03:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 4073715 00:13:15.464 03:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:13:15.464 03:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:15.464 03:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4073715 00:13:15.464 03:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:15.464 03:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:15.464 03:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4073715' 00:13:15.464 killing process with pid 4073715 00:13:15.464 03:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 4073715 00:13:15.464 [2024-05-15 03:07:46.459013] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:15.464 03:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 4073715 00:13:15.464 [2024-05-15 03:07:46.483809] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:13:15.720 00:13:15.720 real 0m28.721s 00:13:15.720 user 0m54.181s 00:13:15.720 sys 0m4.105s 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.720 ************************************ 00:13:15.720 END TEST raid_state_function_test 00:13:15.720 ************************************ 00:13:15.720 03:07:46 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:13:15.720 03:07:46 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:15.720 03:07:46 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:15.720 03:07:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.720 ************************************ 00:13:15.720 START TEST raid_state_function_test_sb 00:13:15.720 ************************************ 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 true 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:15.720 
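The raid_state_function_test_sb case starting here reruns the state machine that just passed, with one difference: superblock=true makes the helper pass -s to bdev_raid_create, so every member reserves space for an on-disk superblock. That is why the dumps further down report data_offset 2048 and data_size 63488 (65536 - 2048) per 65536-block malloc bdev, where the run above showed 0 and 65536, and why the assembled 3-disk concat advertises 3 * 63488 = 190464 blocks instead of 196608. The create call, as traced at @251/@257/@265 below ($rpc/$sock shorthands as before):

"$rpc" -s "$sock" bdev_raid_create -z 64 -s -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid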
03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=4079072 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4079072' 00:13:15.720 Process raid pid: 4079072 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 4079072 /var/tmp/spdk-raid.sock 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 4079072 ']' 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:15.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:15.720 03:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.720 [2024-05-15 03:07:46.844545] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
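Each test case boots its own SPDK application: @244-@247 above launch the in-tree bdev_svc app on a private RPC socket with bdev_raid debug logging enabled, record its pid (4079072 here), and block in waitforlisten until the socket accepts RPCs. Reduced to a sketch, the lifecycle is (paths exactly as printed in the trace):

app=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
"$app" -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &   # -L bdev_raid produces the *DEBUG* lines in this log
raid_pid=$!
# ... wait for the socket, then drive the test through rpc.py -s /var/tmp/spdk-raid.sock ...
kill "$raid_pid" && wait "$raid_pid"                    # killprocess, as at the end of the previous test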
00:13:15.720 [2024-05-15 03:07:46.844595] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.977 [2024-05-15 03:07:46.941742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.977 [2024-05-15 03:07:47.034862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.977 [2024-05-15 03:07:47.096045] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.977 [2024-05-15 03:07:47.096078] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.969 03:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:16.969 03:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:13:16.970 03:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:16.970 [2024-05-15 03:07:48.022924] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:16.970 [2024-05-15 03:07:48.022965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:16.970 [2024-05-15 03:07:48.022975] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:16.970 [2024-05-15 03:07:48.022984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:16.970 [2024-05-15 03:07:48.022990] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:16.970 [2024-05-15 03:07:48.022999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:16.970 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:16.970 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:16.970 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:16.970 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:16.970 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:16.970 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:16.970 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:16.970 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:16.970 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:16.970 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:16.970 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.970 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.228 03:07:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:17.228 "name": "Existed_Raid", 00:13:17.228 "uuid": "503542ea-a57e-4a93-a201-533192816c29", 00:13:17.228 "strip_size_kb": 64, 00:13:17.228 "state": "configuring", 00:13:17.228 "raid_level": "concat", 00:13:17.228 "superblock": true, 00:13:17.228 "num_base_bdevs": 3, 00:13:17.228 "num_base_bdevs_discovered": 0, 00:13:17.228 "num_base_bdevs_operational": 3, 00:13:17.228 "base_bdevs_list": [ 00:13:17.228 { 00:13:17.228 "name": "BaseBdev1", 00:13:17.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.228 "is_configured": false, 00:13:17.228 "data_offset": 0, 00:13:17.228 "data_size": 0 00:13:17.228 }, 00:13:17.228 { 00:13:17.228 "name": "BaseBdev2", 00:13:17.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.228 "is_configured": false, 00:13:17.228 "data_offset": 0, 00:13:17.228 "data_size": 0 00:13:17.228 }, 00:13:17.228 { 00:13:17.228 "name": "BaseBdev3", 00:13:17.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.228 "is_configured": false, 00:13:17.228 "data_offset": 0, 00:13:17.228 "data_size": 0 00:13:17.228 } 00:13:17.228 ] 00:13:17.228 }' 00:13:17.228 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:17.228 03:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.794 03:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:18.052 [2024-05-15 03:07:49.129711] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:18.052 [2024-05-15 03:07:49.129738] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x245cde0 name Existed_Raid, state configuring 00:13:18.052 03:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:18.311 [2024-05-15 03:07:49.382413] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.311 [2024-05-15 03:07:49.382440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.311 [2024-05-15 03:07:49.382448] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.311 [2024-05-15 03:07:49.382457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.311 [2024-05-15 03:07:49.382464] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:18.311 [2024-05-15 03:07:49.382472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:18.311 03:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:18.570 [2024-05-15 03:07:49.640602] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.570 BaseBdev1 00:13:18.570 03:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:13:18.570 03:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:18.570 03:07:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:18.570 03:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:18.570 03:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:18.570 03:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:18.570 03:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:18.828 03:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:19.086 [ 00:13:19.086 { 00:13:19.086 "name": "BaseBdev1", 00:13:19.086 "aliases": [ 00:13:19.086 "d0737e86-652d-4b70-93df-3b5420ba89f2" 00:13:19.086 ], 00:13:19.086 "product_name": "Malloc disk", 00:13:19.086 "block_size": 512, 00:13:19.086 "num_blocks": 65536, 00:13:19.086 "uuid": "d0737e86-652d-4b70-93df-3b5420ba89f2", 00:13:19.086 "assigned_rate_limits": { 00:13:19.086 "rw_ios_per_sec": 0, 00:13:19.086 "rw_mbytes_per_sec": 0, 00:13:19.086 "r_mbytes_per_sec": 0, 00:13:19.086 "w_mbytes_per_sec": 0 00:13:19.086 }, 00:13:19.086 "claimed": true, 00:13:19.086 "claim_type": "exclusive_write", 00:13:19.086 "zoned": false, 00:13:19.086 "supported_io_types": { 00:13:19.086 "read": true, 00:13:19.086 "write": true, 00:13:19.086 "unmap": true, 00:13:19.086 "write_zeroes": true, 00:13:19.086 "flush": true, 00:13:19.086 "reset": true, 00:13:19.086 "compare": false, 00:13:19.086 "compare_and_write": false, 00:13:19.086 "abort": true, 00:13:19.086 "nvme_admin": false, 00:13:19.086 "nvme_io": false 00:13:19.086 }, 00:13:19.086 "memory_domains": [ 00:13:19.086 { 00:13:19.086 "dma_device_id": "system", 00:13:19.086 "dma_device_type": 1 00:13:19.086 }, 00:13:19.086 { 00:13:19.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.086 "dma_device_type": 2 00:13:19.086 } 00:13:19.086 ], 00:13:19.086 "driver_specific": {} 00:13:19.086 } 00:13:19.086 ] 00:13:19.086 03:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:19.086 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:19.086 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:19.086 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:19.086 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:19.086 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:19.086 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:19.086 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:19.086 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:19.086 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:19.086 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:19.086 03:07:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.086 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.344 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:19.344 "name": "Existed_Raid", 00:13:19.344 "uuid": "59d0bc32-68ee-4644-9fa1-ee7d1c6d83b2", 00:13:19.344 "strip_size_kb": 64, 00:13:19.344 "state": "configuring", 00:13:19.344 "raid_level": "concat", 00:13:19.344 "superblock": true, 00:13:19.344 "num_base_bdevs": 3, 00:13:19.344 "num_base_bdevs_discovered": 1, 00:13:19.344 "num_base_bdevs_operational": 3, 00:13:19.344 "base_bdevs_list": [ 00:13:19.344 { 00:13:19.344 "name": "BaseBdev1", 00:13:19.344 "uuid": "d0737e86-652d-4b70-93df-3b5420ba89f2", 00:13:19.344 "is_configured": true, 00:13:19.344 "data_offset": 2048, 00:13:19.344 "data_size": 63488 00:13:19.344 }, 00:13:19.344 { 00:13:19.344 "name": "BaseBdev2", 00:13:19.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.344 "is_configured": false, 00:13:19.344 "data_offset": 0, 00:13:19.344 "data_size": 0 00:13:19.344 }, 00:13:19.344 { 00:13:19.344 "name": "BaseBdev3", 00:13:19.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.344 "is_configured": false, 00:13:19.344 "data_offset": 0, 00:13:19.344 "data_size": 0 00:13:19.344 } 00:13:19.344 ] 00:13:19.344 }' 00:13:19.344 03:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:19.344 03:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.909 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:20.167 [2024-05-15 03:07:51.276971] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:20.167 [2024-05-15 03:07:51.277007] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x245c6b0 name Existed_Raid, state configuring 00:13:20.167 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:20.425 [2024-05-15 03:07:51.533687] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.425 [2024-05-15 03:07:51.535274] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:20.425 [2024-05-15 03:07:51.535304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:20.425 [2024-05-15 03:07:51.535312] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:20.425 [2024-05-15 03:07:51.535321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:20.425 03:07:51 
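Every verify_raid_bdev_state call in this test is the same probe: fetch bdev_raid_get_bdevs all, select the array by name, and compare fields against the expected values passed in (@117-@127). While members are still being attached, the interesting fields evolve like this (sketch, shorthands as before):

info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")')
jq -r .state                      <<< "$info"   # "configuring" until all members exist, then "online"
jq -r .num_base_bdevs_discovered  <<< "$info"   # climbs 0 -> 3 as BaseBdev1..3 are created
jq -r .num_base_bdevs_operational <<< "$info"   # stays 3, the target layout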
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.425 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.684 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:20.684 "name": "Existed_Raid", 00:13:20.684 "uuid": "8b0497bd-27f0-46ab-90d9-5ae564abe0f8", 00:13:20.684 "strip_size_kb": 64, 00:13:20.684 "state": "configuring", 00:13:20.684 "raid_level": "concat", 00:13:20.684 "superblock": true, 00:13:20.684 "num_base_bdevs": 3, 00:13:20.684 "num_base_bdevs_discovered": 1, 00:13:20.684 "num_base_bdevs_operational": 3, 00:13:20.684 "base_bdevs_list": [ 00:13:20.684 { 00:13:20.684 "name": "BaseBdev1", 00:13:20.684 "uuid": "d0737e86-652d-4b70-93df-3b5420ba89f2", 00:13:20.684 "is_configured": true, 00:13:20.684 "data_offset": 2048, 00:13:20.684 "data_size": 63488 00:13:20.684 }, 00:13:20.684 { 00:13:20.684 "name": "BaseBdev2", 00:13:20.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.684 "is_configured": false, 00:13:20.684 "data_offset": 0, 00:13:20.684 "data_size": 0 00:13:20.684 }, 00:13:20.684 { 00:13:20.684 "name": "BaseBdev3", 00:13:20.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.684 "is_configured": false, 00:13:20.684 "data_offset": 0, 00:13:20.684 "data_size": 0 00:13:20.684 } 00:13:20.684 ] 00:13:20.684 }' 00:13:20.684 03:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:20.684 03:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.617 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:21.617 [2024-05-15 03:07:52.671979] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.617 BaseBdev2 00:13:21.617 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:13:21.617 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:21.617 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:21.617 03:07:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:21.617 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:21.617 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:21.617 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:21.875 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.133 [ 00:13:22.133 { 00:13:22.133 "name": "BaseBdev2", 00:13:22.133 "aliases": [ 00:13:22.133 "6ee4998d-9446-4a4e-aa7a-16e58efce491" 00:13:22.133 ], 00:13:22.133 "product_name": "Malloc disk", 00:13:22.133 "block_size": 512, 00:13:22.134 "num_blocks": 65536, 00:13:22.134 "uuid": "6ee4998d-9446-4a4e-aa7a-16e58efce491", 00:13:22.134 "assigned_rate_limits": { 00:13:22.134 "rw_ios_per_sec": 0, 00:13:22.134 "rw_mbytes_per_sec": 0, 00:13:22.134 "r_mbytes_per_sec": 0, 00:13:22.134 "w_mbytes_per_sec": 0 00:13:22.134 }, 00:13:22.134 "claimed": true, 00:13:22.134 "claim_type": "exclusive_write", 00:13:22.134 "zoned": false, 00:13:22.134 "supported_io_types": { 00:13:22.134 "read": true, 00:13:22.134 "write": true, 00:13:22.134 "unmap": true, 00:13:22.134 "write_zeroes": true, 00:13:22.134 "flush": true, 00:13:22.134 "reset": true, 00:13:22.134 "compare": false, 00:13:22.134 "compare_and_write": false, 00:13:22.134 "abort": true, 00:13:22.134 "nvme_admin": false, 00:13:22.134 "nvme_io": false 00:13:22.134 }, 00:13:22.134 "memory_domains": [ 00:13:22.134 { 00:13:22.134 "dma_device_id": "system", 00:13:22.134 "dma_device_type": 1 00:13:22.134 }, 00:13:22.134 { 00:13:22.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.134 "dma_device_type": 2 00:13:22.134 } 00:13:22.134 ], 00:13:22.134 "driver_specific": {} 00:13:22.134 } 00:13:22.134 ] 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:22.134 03:07:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.134 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.392 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:22.392 "name": "Existed_Raid", 00:13:22.392 "uuid": "8b0497bd-27f0-46ab-90d9-5ae564abe0f8", 00:13:22.392 "strip_size_kb": 64, 00:13:22.392 "state": "configuring", 00:13:22.392 "raid_level": "concat", 00:13:22.392 "superblock": true, 00:13:22.392 "num_base_bdevs": 3, 00:13:22.392 "num_base_bdevs_discovered": 2, 00:13:22.392 "num_base_bdevs_operational": 3, 00:13:22.392 "base_bdevs_list": [ 00:13:22.392 { 00:13:22.392 "name": "BaseBdev1", 00:13:22.392 "uuid": "d0737e86-652d-4b70-93df-3b5420ba89f2", 00:13:22.392 "is_configured": true, 00:13:22.392 "data_offset": 2048, 00:13:22.392 "data_size": 63488 00:13:22.392 }, 00:13:22.392 { 00:13:22.392 "name": "BaseBdev2", 00:13:22.392 "uuid": "6ee4998d-9446-4a4e-aa7a-16e58efce491", 00:13:22.392 "is_configured": true, 00:13:22.392 "data_offset": 2048, 00:13:22.392 "data_size": 63488 00:13:22.392 }, 00:13:22.392 { 00:13:22.392 "name": "BaseBdev3", 00:13:22.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.392 "is_configured": false, 00:13:22.392 "data_offset": 0, 00:13:22.392 "data_size": 0 00:13:22.392 } 00:13:22.392 ] 00:13:22.392 }' 00:13:22.392 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:22.392 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.957 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.215 [2024-05-15 03:07:54.287507] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.215 [2024-05-15 03:07:54.287659] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x245d760 00:13:23.215 [2024-05-15 03:07:54.287676] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:23.215 [2024-05-15 03:07:54.287873] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2474690 00:13:23.215 [2024-05-15 03:07:54.288003] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x245d760 00:13:23.215 [2024-05-15 03:07:54.288012] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x245d760 00:13:23.215 [2024-05-15 03:07:54.288110] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.215 BaseBdev3 00:13:23.215 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:13:23.215 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:23.216 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:23.216 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:23.216 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:23.216 
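Creating BaseBdev3 completes the set: with the third member claimed, raid_bdev_configure_cont registers the io device and the array flips from configuring to online, which is what the 'io device register 0x245d760' and 'blockcnt 190464, blocklen 512' DEBUG lines above record. The advertised size checks out for three superblocked members:

echo $(( 3 * (65536 - 2048) ))   # 190464, the blockcnt/num_blocks reported for Existed_Raid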
03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:23.216 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:23.473 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:23.731 [ 00:13:23.731 { 00:13:23.731 "name": "BaseBdev3", 00:13:23.731 "aliases": [ 00:13:23.731 "f55f0fe8-c68c-4c56-ae12-42536b2614ea" 00:13:23.731 ], 00:13:23.731 "product_name": "Malloc disk", 00:13:23.731 "block_size": 512, 00:13:23.731 "num_blocks": 65536, 00:13:23.731 "uuid": "f55f0fe8-c68c-4c56-ae12-42536b2614ea", 00:13:23.731 "assigned_rate_limits": { 00:13:23.731 "rw_ios_per_sec": 0, 00:13:23.731 "rw_mbytes_per_sec": 0, 00:13:23.731 "r_mbytes_per_sec": 0, 00:13:23.731 "w_mbytes_per_sec": 0 00:13:23.731 }, 00:13:23.731 "claimed": true, 00:13:23.731 "claim_type": "exclusive_write", 00:13:23.731 "zoned": false, 00:13:23.731 "supported_io_types": { 00:13:23.731 "read": true, 00:13:23.731 "write": true, 00:13:23.731 "unmap": true, 00:13:23.731 "write_zeroes": true, 00:13:23.731 "flush": true, 00:13:23.731 "reset": true, 00:13:23.731 "compare": false, 00:13:23.731 "compare_and_write": false, 00:13:23.731 "abort": true, 00:13:23.731 "nvme_admin": false, 00:13:23.731 "nvme_io": false 00:13:23.732 }, 00:13:23.732 "memory_domains": [ 00:13:23.732 { 00:13:23.732 "dma_device_id": "system", 00:13:23.732 "dma_device_type": 1 00:13:23.732 }, 00:13:23.732 { 00:13:23.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.732 "dma_device_type": 2 00:13:23.732 } 00:13:23.732 ], 00:13:23.732 "driver_specific": {} 00:13:23.732 } 00:13:23.732 ] 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.732 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.990 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:23.990 "name": "Existed_Raid", 00:13:23.990 "uuid": "8b0497bd-27f0-46ab-90d9-5ae564abe0f8", 00:13:23.990 "strip_size_kb": 64, 00:13:23.990 "state": "online", 00:13:23.990 "raid_level": "concat", 00:13:23.990 "superblock": true, 00:13:23.990 "num_base_bdevs": 3, 00:13:23.990 "num_base_bdevs_discovered": 3, 00:13:23.990 "num_base_bdevs_operational": 3, 00:13:23.990 "base_bdevs_list": [ 00:13:23.990 { 00:13:23.990 "name": "BaseBdev1", 00:13:23.990 "uuid": "d0737e86-652d-4b70-93df-3b5420ba89f2", 00:13:23.990 "is_configured": true, 00:13:23.990 "data_offset": 2048, 00:13:23.990 "data_size": 63488 00:13:23.990 }, 00:13:23.990 { 00:13:23.990 "name": "BaseBdev2", 00:13:23.990 "uuid": "6ee4998d-9446-4a4e-aa7a-16e58efce491", 00:13:23.990 "is_configured": true, 00:13:23.990 "data_offset": 2048, 00:13:23.990 "data_size": 63488 00:13:23.990 }, 00:13:23.990 { 00:13:23.990 "name": "BaseBdev3", 00:13:23.990 "uuid": "f55f0fe8-c68c-4c56-ae12-42536b2614ea", 00:13:23.990 "is_configured": true, 00:13:23.990 "data_offset": 2048, 00:13:23.990 "data_size": 63488 00:13:23.990 } 00:13:23.990 ] 00:13:23.990 }' 00:13:23.990 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:23.990 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.556 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:13:24.556 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:24.556 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:24.556 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:24.556 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:24.556 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:13:24.556 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:24.556 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:24.814 [2024-05-15 03:07:55.924190] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.814 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:24.814 "name": "Existed_Raid", 00:13:24.814 "aliases": [ 00:13:24.814 "8b0497bd-27f0-46ab-90d9-5ae564abe0f8" 00:13:24.814 ], 00:13:24.814 "product_name": "Raid Volume", 00:13:24.814 "block_size": 512, 00:13:24.814 "num_blocks": 190464, 00:13:24.814 "uuid": "8b0497bd-27f0-46ab-90d9-5ae564abe0f8", 00:13:24.814 "assigned_rate_limits": { 00:13:24.814 "rw_ios_per_sec": 0, 00:13:24.814 "rw_mbytes_per_sec": 0, 00:13:24.814 "r_mbytes_per_sec": 0, 00:13:24.814 "w_mbytes_per_sec": 0 00:13:24.814 }, 00:13:24.814 "claimed": false, 00:13:24.814 "zoned": false, 00:13:24.814 "supported_io_types": { 00:13:24.814 "read": true, 00:13:24.814 "write": true, 00:13:24.814 "unmap": true, 00:13:24.814 "write_zeroes": true, 
00:13:24.814 "flush": true, 00:13:24.814 "reset": true, 00:13:24.814 "compare": false, 00:13:24.814 "compare_and_write": false, 00:13:24.814 "abort": false, 00:13:24.814 "nvme_admin": false, 00:13:24.814 "nvme_io": false 00:13:24.814 }, 00:13:24.814 "memory_domains": [ 00:13:24.814 { 00:13:24.814 "dma_device_id": "system", 00:13:24.814 "dma_device_type": 1 00:13:24.814 }, 00:13:24.814 { 00:13:24.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.814 "dma_device_type": 2 00:13:24.814 }, 00:13:24.814 { 00:13:24.814 "dma_device_id": "system", 00:13:24.814 "dma_device_type": 1 00:13:24.814 }, 00:13:24.814 { 00:13:24.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.814 "dma_device_type": 2 00:13:24.814 }, 00:13:24.814 { 00:13:24.814 "dma_device_id": "system", 00:13:24.814 "dma_device_type": 1 00:13:24.814 }, 00:13:24.814 { 00:13:24.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.814 "dma_device_type": 2 00:13:24.814 } 00:13:24.814 ], 00:13:24.814 "driver_specific": { 00:13:24.814 "raid": { 00:13:24.814 "uuid": "8b0497bd-27f0-46ab-90d9-5ae564abe0f8", 00:13:24.814 "strip_size_kb": 64, 00:13:24.814 "state": "online", 00:13:24.814 "raid_level": "concat", 00:13:24.814 "superblock": true, 00:13:24.814 "num_base_bdevs": 3, 00:13:24.814 "num_base_bdevs_discovered": 3, 00:13:24.814 "num_base_bdevs_operational": 3, 00:13:24.814 "base_bdevs_list": [ 00:13:24.814 { 00:13:24.814 "name": "BaseBdev1", 00:13:24.814 "uuid": "d0737e86-652d-4b70-93df-3b5420ba89f2", 00:13:24.814 "is_configured": true, 00:13:24.814 "data_offset": 2048, 00:13:24.814 "data_size": 63488 00:13:24.814 }, 00:13:24.814 { 00:13:24.814 "name": "BaseBdev2", 00:13:24.814 "uuid": "6ee4998d-9446-4a4e-aa7a-16e58efce491", 00:13:24.814 "is_configured": true, 00:13:24.814 "data_offset": 2048, 00:13:24.814 "data_size": 63488 00:13:24.814 }, 00:13:24.814 { 00:13:24.814 "name": "BaseBdev3", 00:13:24.814 "uuid": "f55f0fe8-c68c-4c56-ae12-42536b2614ea", 00:13:24.814 "is_configured": true, 00:13:24.814 "data_offset": 2048, 00:13:24.814 "data_size": 63488 00:13:24.814 } 00:13:24.814 ] 00:13:24.814 } 00:13:24.814 } 00:13:24.814 }' 00:13:24.814 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:25.073 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:13:25.073 BaseBdev2 00:13:25.073 BaseBdev3' 00:13:25.073 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:25.073 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:25.073 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:25.331 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:25.331 "name": "BaseBdev1", 00:13:25.331 "aliases": [ 00:13:25.331 "d0737e86-652d-4b70-93df-3b5420ba89f2" 00:13:25.331 ], 00:13:25.331 "product_name": "Malloc disk", 00:13:25.331 "block_size": 512, 00:13:25.331 "num_blocks": 65536, 00:13:25.331 "uuid": "d0737e86-652d-4b70-93df-3b5420ba89f2", 00:13:25.331 "assigned_rate_limits": { 00:13:25.331 "rw_ios_per_sec": 0, 00:13:25.331 "rw_mbytes_per_sec": 0, 00:13:25.331 "r_mbytes_per_sec": 0, 00:13:25.331 "w_mbytes_per_sec": 0 00:13:25.331 }, 00:13:25.331 "claimed": true, 00:13:25.331 "claim_type": 
"exclusive_write", 00:13:25.331 "zoned": false, 00:13:25.331 "supported_io_types": { 00:13:25.331 "read": true, 00:13:25.331 "write": true, 00:13:25.331 "unmap": true, 00:13:25.331 "write_zeroes": true, 00:13:25.331 "flush": true, 00:13:25.331 "reset": true, 00:13:25.331 "compare": false, 00:13:25.331 "compare_and_write": false, 00:13:25.331 "abort": true, 00:13:25.331 "nvme_admin": false, 00:13:25.331 "nvme_io": false 00:13:25.331 }, 00:13:25.331 "memory_domains": [ 00:13:25.331 { 00:13:25.331 "dma_device_id": "system", 00:13:25.331 "dma_device_type": 1 00:13:25.331 }, 00:13:25.331 { 00:13:25.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.331 "dma_device_type": 2 00:13:25.331 } 00:13:25.331 ], 00:13:25.331 "driver_specific": {} 00:13:25.331 }' 00:13:25.331 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:25.331 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:25.331 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:25.331 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:25.331 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:25.331 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:25.331 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:25.331 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:25.590 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:25.590 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:25.590 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:25.590 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:25.590 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:25.590 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:25.590 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:25.849 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:25.849 "name": "BaseBdev2", 00:13:25.849 "aliases": [ 00:13:25.849 "6ee4998d-9446-4a4e-aa7a-16e58efce491" 00:13:25.849 ], 00:13:25.849 "product_name": "Malloc disk", 00:13:25.849 "block_size": 512, 00:13:25.849 "num_blocks": 65536, 00:13:25.849 "uuid": "6ee4998d-9446-4a4e-aa7a-16e58efce491", 00:13:25.849 "assigned_rate_limits": { 00:13:25.849 "rw_ios_per_sec": 0, 00:13:25.849 "rw_mbytes_per_sec": 0, 00:13:25.849 "r_mbytes_per_sec": 0, 00:13:25.849 "w_mbytes_per_sec": 0 00:13:25.849 }, 00:13:25.849 "claimed": true, 00:13:25.849 "claim_type": "exclusive_write", 00:13:25.849 "zoned": false, 00:13:25.849 "supported_io_types": { 00:13:25.849 "read": true, 00:13:25.849 "write": true, 00:13:25.849 "unmap": true, 00:13:25.849 "write_zeroes": true, 00:13:25.849 "flush": true, 00:13:25.849 "reset": true, 00:13:25.849 "compare": false, 00:13:25.849 "compare_and_write": false, 00:13:25.849 "abort": true, 00:13:25.849 "nvme_admin": false, 00:13:25.849 "nvme_io": false 
00:13:25.849 }, 00:13:25.849 "memory_domains": [ 00:13:25.849 { 00:13:25.849 "dma_device_id": "system", 00:13:25.849 "dma_device_type": 1 00:13:25.849 }, 00:13:25.849 { 00:13:25.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.849 "dma_device_type": 2 00:13:25.849 } 00:13:25.849 ], 00:13:25.849 "driver_specific": {} 00:13:25.849 }' 00:13:25.849 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:25.849 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:25.849 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:25.849 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:26.107 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:26.107 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:26.107 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:26.107 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:26.107 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:26.107 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:26.107 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:26.107 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:26.107 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:26.107 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:26.107 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:26.366 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:26.366 "name": "BaseBdev3", 00:13:26.366 "aliases": [ 00:13:26.366 "f55f0fe8-c68c-4c56-ae12-42536b2614ea" 00:13:26.366 ], 00:13:26.366 "product_name": "Malloc disk", 00:13:26.366 "block_size": 512, 00:13:26.366 "num_blocks": 65536, 00:13:26.366 "uuid": "f55f0fe8-c68c-4c56-ae12-42536b2614ea", 00:13:26.366 "assigned_rate_limits": { 00:13:26.366 "rw_ios_per_sec": 0, 00:13:26.366 "rw_mbytes_per_sec": 0, 00:13:26.366 "r_mbytes_per_sec": 0, 00:13:26.366 "w_mbytes_per_sec": 0 00:13:26.366 }, 00:13:26.366 "claimed": true, 00:13:26.366 "claim_type": "exclusive_write", 00:13:26.366 "zoned": false, 00:13:26.366 "supported_io_types": { 00:13:26.366 "read": true, 00:13:26.366 "write": true, 00:13:26.366 "unmap": true, 00:13:26.366 "write_zeroes": true, 00:13:26.366 "flush": true, 00:13:26.366 "reset": true, 00:13:26.366 "compare": false, 00:13:26.366 "compare_and_write": false, 00:13:26.366 "abort": true, 00:13:26.366 "nvme_admin": false, 00:13:26.366 "nvme_io": false 00:13:26.366 }, 00:13:26.366 "memory_domains": [ 00:13:26.366 { 00:13:26.366 "dma_device_id": "system", 00:13:26.366 "dma_device_type": 1 00:13:26.366 }, 00:13:26.366 { 00:13:26.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.366 "dma_device_type": 2 00:13:26.366 } 00:13:26.366 ], 00:13:26.366 "driver_specific": {} 00:13:26.366 }' 00:13:26.366 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq 
.block_size 00:13:26.624 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:26.624 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:26.624 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:26.624 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:26.624 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:26.624 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:26.624 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:26.624 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:26.624 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:26.883 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:26.883 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:26.883 03:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:27.143 [2024-05-15 03:07:58.081743] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:27.143 [2024-05-15 03:07:58.081768] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.143 [2024-05-15 03:07:58.081806] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.143 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.402 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:27.402 "name": "Existed_Raid", 00:13:27.402 "uuid": "8b0497bd-27f0-46ab-90d9-5ae564abe0f8", 00:13:27.402 "strip_size_kb": 64, 00:13:27.402 "state": "offline", 00:13:27.402 "raid_level": "concat", 00:13:27.402 "superblock": true, 00:13:27.402 "num_base_bdevs": 3, 00:13:27.402 "num_base_bdevs_discovered": 2, 00:13:27.402 "num_base_bdevs_operational": 2, 00:13:27.402 "base_bdevs_list": [ 00:13:27.402 { 00:13:27.402 "name": null, 00:13:27.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.402 "is_configured": false, 00:13:27.402 "data_offset": 2048, 00:13:27.402 "data_size": 63488 00:13:27.402 }, 00:13:27.402 { 00:13:27.402 "name": "BaseBdev2", 00:13:27.402 "uuid": "6ee4998d-9446-4a4e-aa7a-16e58efce491", 00:13:27.402 "is_configured": true, 00:13:27.402 "data_offset": 2048, 00:13:27.402 "data_size": 63488 00:13:27.402 }, 00:13:27.402 { 00:13:27.402 "name": "BaseBdev3", 00:13:27.402 "uuid": "f55f0fe8-c68c-4c56-ae12-42536b2614ea", 00:13:27.402 "is_configured": true, 00:13:27.402 "data_offset": 2048, 00:13:27.402 "data_size": 63488 00:13:27.402 } 00:13:27.402 ] 00:13:27.402 }' 00:13:27.402 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:27.402 03:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.969 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:27.969 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:27.969 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:13:27.969 03:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.226 03:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:13:28.226 03:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:28.226 03:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:28.483 [2024-05-15 03:07:59.490702] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:28.483 03:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:28.483 03:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:28.483 03:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.483 03:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:13:28.741 03:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:13:28.741 03:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:28.741 03:07:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:28.999 [2024-05-15 03:08:00.002501] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:28.999 [2024-05-15 03:08:00.002541] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x245d760 name Existed_Raid, state offline 00:13:28.999 03:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:28.999 03:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:28.999 03:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.999 03:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:13:29.257 03:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:13:29.257 03:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:13:29.257 03:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:13:29.257 03:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:13:29.257 03:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:13:29.257 03:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:29.514 BaseBdev2 00:13:29.514 03:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:13:29.514 03:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:29.514 03:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:29.514 03:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:29.514 03:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:29.514 03:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:29.514 03:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:29.772 03:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:30.030 [ 00:13:30.030 { 00:13:30.030 "name": "BaseBdev2", 00:13:30.030 "aliases": [ 00:13:30.030 "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa" 00:13:30.030 ], 00:13:30.030 "product_name": "Malloc disk", 00:13:30.030 "block_size": 512, 00:13:30.030 "num_blocks": 65536, 00:13:30.030 "uuid": "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa", 00:13:30.030 "assigned_rate_limits": { 00:13:30.030 "rw_ios_per_sec": 0, 00:13:30.030 "rw_mbytes_per_sec": 0, 00:13:30.030 "r_mbytes_per_sec": 0, 00:13:30.030 "w_mbytes_per_sec": 0 00:13:30.030 }, 00:13:30.030 "claimed": false, 00:13:30.030 "zoned": false, 00:13:30.030 "supported_io_types": { 00:13:30.030 "read": true, 00:13:30.030 "write": true, 
00:13:30.030 "unmap": true, 00:13:30.030 "write_zeroes": true, 00:13:30.030 "flush": true, 00:13:30.030 "reset": true, 00:13:30.030 "compare": false, 00:13:30.030 "compare_and_write": false, 00:13:30.030 "abort": true, 00:13:30.030 "nvme_admin": false, 00:13:30.030 "nvme_io": false 00:13:30.030 }, 00:13:30.030 "memory_domains": [ 00:13:30.030 { 00:13:30.030 "dma_device_id": "system", 00:13:30.030 "dma_device_type": 1 00:13:30.030 }, 00:13:30.030 { 00:13:30.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.030 "dma_device_type": 2 00:13:30.030 } 00:13:30.030 ], 00:13:30.030 "driver_specific": {} 00:13:30.030 } 00:13:30.030 ] 00:13:30.030 03:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:30.030 03:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:13:30.030 03:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:13:30.030 03:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:30.288 BaseBdev3 00:13:30.288 03:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:13:30.288 03:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:30.288 03:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:30.288 03:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:30.288 03:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:30.288 03:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:30.288 03:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:30.545 03:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:30.802 [ 00:13:30.802 { 00:13:30.802 "name": "BaseBdev3", 00:13:30.802 "aliases": [ 00:13:30.802 "4006a284-c9f8-4b55-bb97-22fc4732d6c0" 00:13:30.802 ], 00:13:30.802 "product_name": "Malloc disk", 00:13:30.802 "block_size": 512, 00:13:30.802 "num_blocks": 65536, 00:13:30.802 "uuid": "4006a284-c9f8-4b55-bb97-22fc4732d6c0", 00:13:30.802 "assigned_rate_limits": { 00:13:30.802 "rw_ios_per_sec": 0, 00:13:30.802 "rw_mbytes_per_sec": 0, 00:13:30.802 "r_mbytes_per_sec": 0, 00:13:30.802 "w_mbytes_per_sec": 0 00:13:30.802 }, 00:13:30.802 "claimed": false, 00:13:30.802 "zoned": false, 00:13:30.802 "supported_io_types": { 00:13:30.802 "read": true, 00:13:30.802 "write": true, 00:13:30.802 "unmap": true, 00:13:30.802 "write_zeroes": true, 00:13:30.802 "flush": true, 00:13:30.802 "reset": true, 00:13:30.802 "compare": false, 00:13:30.802 "compare_and_write": false, 00:13:30.802 "abort": true, 00:13:30.802 "nvme_admin": false, 00:13:30.802 "nvme_io": false 00:13:30.802 }, 00:13:30.802 "memory_domains": [ 00:13:30.802 { 00:13:30.802 "dma_device_id": "system", 00:13:30.802 "dma_device_type": 1 00:13:30.802 }, 00:13:30.802 { 00:13:30.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.802 "dma_device_type": 2 00:13:30.802 } 
00:13:30.802 ], 00:13:30.802 "driver_specific": {} 00:13:30.802 } 00:13:30.802 ] 00:13:30.802 03:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:30.802 03:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:13:30.802 03:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:13:30.802 03:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:31.060 [2024-05-15 03:08:02.055283] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:31.060 [2024-05-15 03:08:02.055319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:31.060 [2024-05-15 03:08:02.055337] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.060 [2024-05-15 03:08:02.056730] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.060 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:31.060 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:31.060 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:31.060 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:31.060 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:31.060 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:31.060 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:31.060 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:31.060 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:31.060 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:31.061 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.061 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.319 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:31.319 "name": "Existed_Raid", 00:13:31.319 "uuid": "15e04061-fce1-4127-a2dc-830385f819fe", 00:13:31.319 "strip_size_kb": 64, 00:13:31.319 "state": "configuring", 00:13:31.319 "raid_level": "concat", 00:13:31.319 "superblock": true, 00:13:31.319 "num_base_bdevs": 3, 00:13:31.319 "num_base_bdevs_discovered": 2, 00:13:31.319 "num_base_bdevs_operational": 3, 00:13:31.319 "base_bdevs_list": [ 00:13:31.319 { 00:13:31.319 "name": "BaseBdev1", 00:13:31.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.319 "is_configured": false, 00:13:31.319 "data_offset": 0, 00:13:31.319 "data_size": 0 00:13:31.319 }, 00:13:31.319 { 00:13:31.319 "name": "BaseBdev2", 00:13:31.319 "uuid": "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa", 
00:13:31.319 "is_configured": true, 00:13:31.319 "data_offset": 2048, 00:13:31.319 "data_size": 63488 00:13:31.319 }, 00:13:31.319 { 00:13:31.319 "name": "BaseBdev3", 00:13:31.319 "uuid": "4006a284-c9f8-4b55-bb97-22fc4732d6c0", 00:13:31.319 "is_configured": true, 00:13:31.319 "data_offset": 2048, 00:13:31.319 "data_size": 63488 00:13:31.319 } 00:13:31.319 ] 00:13:31.319 }' 00:13:31.319 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:31.319 03:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.884 03:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:32.143 [2024-05-15 03:08:03.210350] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:32.143 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:32.143 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:32.143 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:32.143 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:32.143 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:32.143 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:32.143 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:32.143 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:32.143 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:32.144 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:32.144 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.144 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.402 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:32.402 "name": "Existed_Raid", 00:13:32.402 "uuid": "15e04061-fce1-4127-a2dc-830385f819fe", 00:13:32.402 "strip_size_kb": 64, 00:13:32.402 "state": "configuring", 00:13:32.402 "raid_level": "concat", 00:13:32.402 "superblock": true, 00:13:32.402 "num_base_bdevs": 3, 00:13:32.402 "num_base_bdevs_discovered": 1, 00:13:32.402 "num_base_bdevs_operational": 3, 00:13:32.402 "base_bdevs_list": [ 00:13:32.402 { 00:13:32.402 "name": "BaseBdev1", 00:13:32.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.402 "is_configured": false, 00:13:32.402 "data_offset": 0, 00:13:32.402 "data_size": 0 00:13:32.402 }, 00:13:32.402 { 00:13:32.402 "name": null, 00:13:32.402 "uuid": "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa", 00:13:32.402 "is_configured": false, 00:13:32.402 "data_offset": 2048, 00:13:32.402 "data_size": 63488 00:13:32.402 }, 00:13:32.402 { 00:13:32.402 "name": "BaseBdev3", 00:13:32.402 "uuid": "4006a284-c9f8-4b55-bb97-22fc4732d6c0", 00:13:32.402 "is_configured": true, 00:13:32.402 
"data_offset": 2048, 00:13:32.402 "data_size": 63488 00:13:32.402 } 00:13:32.402 ] 00:13:32.402 }' 00:13:32.402 03:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:32.402 03:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.970 03:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.970 03:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:33.259 03:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:13:33.259 03:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:33.544 [2024-05-15 03:08:04.613266] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.544 BaseBdev1 00:13:33.544 03:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:13:33.544 03:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:33.544 03:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:33.544 03:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:33.544 03:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:33.544 03:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:33.544 03:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:33.803 03:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:34.062 [ 00:13:34.062 { 00:13:34.062 "name": "BaseBdev1", 00:13:34.062 "aliases": [ 00:13:34.062 "21e84e93-175c-4b91-9a32-56d1a20b03c7" 00:13:34.062 ], 00:13:34.062 "product_name": "Malloc disk", 00:13:34.062 "block_size": 512, 00:13:34.062 "num_blocks": 65536, 00:13:34.062 "uuid": "21e84e93-175c-4b91-9a32-56d1a20b03c7", 00:13:34.062 "assigned_rate_limits": { 00:13:34.062 "rw_ios_per_sec": 0, 00:13:34.062 "rw_mbytes_per_sec": 0, 00:13:34.062 "r_mbytes_per_sec": 0, 00:13:34.062 "w_mbytes_per_sec": 0 00:13:34.062 }, 00:13:34.062 "claimed": true, 00:13:34.062 "claim_type": "exclusive_write", 00:13:34.062 "zoned": false, 00:13:34.062 "supported_io_types": { 00:13:34.062 "read": true, 00:13:34.062 "write": true, 00:13:34.062 "unmap": true, 00:13:34.062 "write_zeroes": true, 00:13:34.062 "flush": true, 00:13:34.062 "reset": true, 00:13:34.062 "compare": false, 00:13:34.062 "compare_and_write": false, 00:13:34.062 "abort": true, 00:13:34.062 "nvme_admin": false, 00:13:34.062 "nvme_io": false 00:13:34.062 }, 00:13:34.062 "memory_domains": [ 00:13:34.062 { 00:13:34.062 "dma_device_id": "system", 00:13:34.062 "dma_device_type": 1 00:13:34.062 }, 00:13:34.062 { 00:13:34.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.062 "dma_device_type": 2 00:13:34.062 } 00:13:34.062 ], 
00:13:34.062 "driver_specific": {} 00:13:34.062 } 00:13:34.062 ] 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.062 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.320 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:34.320 "name": "Existed_Raid", 00:13:34.320 "uuid": "15e04061-fce1-4127-a2dc-830385f819fe", 00:13:34.320 "strip_size_kb": 64, 00:13:34.320 "state": "configuring", 00:13:34.320 "raid_level": "concat", 00:13:34.320 "superblock": true, 00:13:34.320 "num_base_bdevs": 3, 00:13:34.320 "num_base_bdevs_discovered": 2, 00:13:34.320 "num_base_bdevs_operational": 3, 00:13:34.320 "base_bdevs_list": [ 00:13:34.320 { 00:13:34.320 "name": "BaseBdev1", 00:13:34.320 "uuid": "21e84e93-175c-4b91-9a32-56d1a20b03c7", 00:13:34.320 "is_configured": true, 00:13:34.320 "data_offset": 2048, 00:13:34.320 "data_size": 63488 00:13:34.320 }, 00:13:34.320 { 00:13:34.320 "name": null, 00:13:34.320 "uuid": "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa", 00:13:34.320 "is_configured": false, 00:13:34.320 "data_offset": 2048, 00:13:34.320 "data_size": 63488 00:13:34.320 }, 00:13:34.320 { 00:13:34.320 "name": "BaseBdev3", 00:13:34.320 "uuid": "4006a284-c9f8-4b55-bb97-22fc4732d6c0", 00:13:34.320 "is_configured": true, 00:13:34.320 "data_offset": 2048, 00:13:34.320 "data_size": 63488 00:13:34.320 } 00:13:34.320 ] 00:13:34.320 }' 00:13:34.320 03:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:34.320 03:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.887 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.887 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:35.145 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ 
true == \t\r\u\e ]] 00:13:35.145 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:35.404 [2024-05-15 03:08:06.506378] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:35.404 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:35.404 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:35.404 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:35.404 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:35.404 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:35.404 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:35.404 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:35.404 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:35.404 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:35.404 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:35.404 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.404 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.662 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:35.662 "name": "Existed_Raid", 00:13:35.662 "uuid": "15e04061-fce1-4127-a2dc-830385f819fe", 00:13:35.662 "strip_size_kb": 64, 00:13:35.662 "state": "configuring", 00:13:35.662 "raid_level": "concat", 00:13:35.662 "superblock": true, 00:13:35.662 "num_base_bdevs": 3, 00:13:35.662 "num_base_bdevs_discovered": 1, 00:13:35.662 "num_base_bdevs_operational": 3, 00:13:35.662 "base_bdevs_list": [ 00:13:35.662 { 00:13:35.662 "name": "BaseBdev1", 00:13:35.662 "uuid": "21e84e93-175c-4b91-9a32-56d1a20b03c7", 00:13:35.662 "is_configured": true, 00:13:35.662 "data_offset": 2048, 00:13:35.662 "data_size": 63488 00:13:35.662 }, 00:13:35.662 { 00:13:35.662 "name": null, 00:13:35.662 "uuid": "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa", 00:13:35.663 "is_configured": false, 00:13:35.663 "data_offset": 2048, 00:13:35.663 "data_size": 63488 00:13:35.663 }, 00:13:35.663 { 00:13:35.663 "name": null, 00:13:35.663 "uuid": "4006a284-c9f8-4b55-bb97-22fc4732d6c0", 00:13:35.663 "is_configured": false, 00:13:35.663 "data_offset": 2048, 00:13:35.663 "data_size": 63488 00:13:35.663 } 00:13:35.663 ] 00:13:35.663 }' 00:13:35.663 03:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:35.663 03:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.599 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.599 03:08:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:36.599 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:13:36.599 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:36.858 [2024-05-15 03:08:07.890095] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:36.858 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:36.858 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:36.858 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:36.858 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:36.858 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:36.858 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:36.858 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:36.858 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:36.858 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:36.858 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:36.858 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.858 03:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.117 03:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:37.117 "name": "Existed_Raid", 00:13:37.117 "uuid": "15e04061-fce1-4127-a2dc-830385f819fe", 00:13:37.117 "strip_size_kb": 64, 00:13:37.117 "state": "configuring", 00:13:37.117 "raid_level": "concat", 00:13:37.117 "superblock": true, 00:13:37.117 "num_base_bdevs": 3, 00:13:37.117 "num_base_bdevs_discovered": 2, 00:13:37.117 "num_base_bdevs_operational": 3, 00:13:37.117 "base_bdevs_list": [ 00:13:37.117 { 00:13:37.117 "name": "BaseBdev1", 00:13:37.117 "uuid": "21e84e93-175c-4b91-9a32-56d1a20b03c7", 00:13:37.117 "is_configured": true, 00:13:37.117 "data_offset": 2048, 00:13:37.117 "data_size": 63488 00:13:37.117 }, 00:13:37.117 { 00:13:37.117 "name": null, 00:13:37.117 "uuid": "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa", 00:13:37.117 "is_configured": false, 00:13:37.117 "data_offset": 2048, 00:13:37.117 "data_size": 63488 00:13:37.117 }, 00:13:37.117 { 00:13:37.117 "name": "BaseBdev3", 00:13:37.117 "uuid": "4006a284-c9f8-4b55-bb97-22fc4732d6c0", 00:13:37.117 "is_configured": true, 00:13:37.117 "data_offset": 2048, 00:13:37.117 "data_size": 63488 00:13:37.117 } 00:13:37.117 ] 00:13:37.117 }' 00:13:37.117 03:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:37.117 03:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.708 03:08:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@324 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.708 03:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:37.965 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:13:37.965 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:38.223 [2024-05-15 03:08:09.265794] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:38.223 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:38.223 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:38.223 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:38.223 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:38.223 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:38.223 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:38.223 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:38.223 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:38.223 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:38.223 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:38.223 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.223 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.481 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:38.481 "name": "Existed_Raid", 00:13:38.481 "uuid": "15e04061-fce1-4127-a2dc-830385f819fe", 00:13:38.481 "strip_size_kb": 64, 00:13:38.481 "state": "configuring", 00:13:38.481 "raid_level": "concat", 00:13:38.481 "superblock": true, 00:13:38.481 "num_base_bdevs": 3, 00:13:38.481 "num_base_bdevs_discovered": 1, 00:13:38.481 "num_base_bdevs_operational": 3, 00:13:38.481 "base_bdevs_list": [ 00:13:38.481 { 00:13:38.481 "name": null, 00:13:38.481 "uuid": "21e84e93-175c-4b91-9a32-56d1a20b03c7", 00:13:38.481 "is_configured": false, 00:13:38.481 "data_offset": 2048, 00:13:38.481 "data_size": 63488 00:13:38.481 }, 00:13:38.481 { 00:13:38.481 "name": null, 00:13:38.481 "uuid": "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa", 00:13:38.481 "is_configured": false, 00:13:38.481 "data_offset": 2048, 00:13:38.481 "data_size": 63488 00:13:38.481 }, 00:13:38.481 { 00:13:38.481 "name": "BaseBdev3", 00:13:38.481 "uuid": "4006a284-c9f8-4b55-bb97-22fc4732d6c0", 00:13:38.481 "is_configured": true, 00:13:38.481 "data_offset": 2048, 00:13:38.481 "data_size": 63488 00:13:38.481 } 00:13:38.481 ] 00:13:38.481 }' 00:13:38.481 03:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
00:13:38.481 03:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.282 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.282 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:39.282 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:13:39.283 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:39.539 [2024-05-15 03:08:10.664016] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:39.539 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:39.539 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:39.539 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:39.539 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:39.539 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:39.539 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:39.540 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:39.540 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:39.540 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:39.540 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:39.540 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.540 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.797 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:39.797 "name": "Existed_Raid", 00:13:39.797 "uuid": "15e04061-fce1-4127-a2dc-830385f819fe", 00:13:39.797 "strip_size_kb": 64, 00:13:39.797 "state": "configuring", 00:13:39.797 "raid_level": "concat", 00:13:39.797 "superblock": true, 00:13:39.797 "num_base_bdevs": 3, 00:13:39.797 "num_base_bdevs_discovered": 2, 00:13:39.797 "num_base_bdevs_operational": 3, 00:13:39.797 "base_bdevs_list": [ 00:13:39.797 { 00:13:39.797 "name": null, 00:13:39.797 "uuid": "21e84e93-175c-4b91-9a32-56d1a20b03c7", 00:13:39.797 "is_configured": false, 00:13:39.797 "data_offset": 2048, 00:13:39.797 "data_size": 63488 00:13:39.797 }, 00:13:39.797 { 00:13:39.797 "name": "BaseBdev2", 00:13:39.797 "uuid": "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa", 00:13:39.797 "is_configured": true, 00:13:39.797 "data_offset": 2048, 00:13:39.797 "data_size": 63488 00:13:39.797 }, 00:13:39.797 { 00:13:39.797 "name": "BaseBdev3", 00:13:39.797 "uuid": "4006a284-c9f8-4b55-bb97-22fc4732d6c0", 00:13:39.797 "is_configured": true, 00:13:39.797 
"data_offset": 2048, 00:13:39.797 "data_size": 63488 00:13:39.797 } 00:13:39.797 ] 00:13:39.797 }' 00:13:39.797 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:39.797 03:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.732 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.732 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:40.732 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:13:40.732 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.732 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:40.997 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 21e84e93-175c-4b91-9a32-56d1a20b03c7 00:13:41.258 [2024-05-15 03:08:12.283518] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:41.258 [2024-05-15 03:08:12.283670] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x245bc20 00:13:41.258 [2024-05-15 03:08:12.283682] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:41.258 [2024-05-15 03:08:12.283873] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26027a0 00:13:41.258 [2024-05-15 03:08:12.283995] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x245bc20 00:13:41.258 [2024-05-15 03:08:12.284004] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x245bc20 00:13:41.258 [2024-05-15 03:08:12.284094] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.258 NewBaseBdev 00:13:41.258 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:13:41.258 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:13:41.258 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:41.258 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:41.258 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:41.258 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:41.258 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:41.516 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:41.775 [ 00:13:41.775 { 00:13:41.775 "name": "NewBaseBdev", 00:13:41.775 "aliases": [ 00:13:41.775 "21e84e93-175c-4b91-9a32-56d1a20b03c7" 00:13:41.775 ], 
00:13:41.775 "product_name": "Malloc disk", 00:13:41.775 "block_size": 512, 00:13:41.775 "num_blocks": 65536, 00:13:41.775 "uuid": "21e84e93-175c-4b91-9a32-56d1a20b03c7", 00:13:41.775 "assigned_rate_limits": { 00:13:41.775 "rw_ios_per_sec": 0, 00:13:41.775 "rw_mbytes_per_sec": 0, 00:13:41.775 "r_mbytes_per_sec": 0, 00:13:41.775 "w_mbytes_per_sec": 0 00:13:41.775 }, 00:13:41.775 "claimed": true, 00:13:41.775 "claim_type": "exclusive_write", 00:13:41.775 "zoned": false, 00:13:41.775 "supported_io_types": { 00:13:41.775 "read": true, 00:13:41.775 "write": true, 00:13:41.775 "unmap": true, 00:13:41.775 "write_zeroes": true, 00:13:41.775 "flush": true, 00:13:41.775 "reset": true, 00:13:41.775 "compare": false, 00:13:41.775 "compare_and_write": false, 00:13:41.775 "abort": true, 00:13:41.775 "nvme_admin": false, 00:13:41.775 "nvme_io": false 00:13:41.775 }, 00:13:41.775 "memory_domains": [ 00:13:41.775 { 00:13:41.775 "dma_device_id": "system", 00:13:41.775 "dma_device_type": 1 00:13:41.775 }, 00:13:41.775 { 00:13:41.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.775 "dma_device_type": 2 00:13:41.775 } 00:13:41.775 ], 00:13:41.775 "driver_specific": {} 00:13:41.775 } 00:13:41.775 ] 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.775 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.033 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:42.033 "name": "Existed_Raid", 00:13:42.033 "uuid": "15e04061-fce1-4127-a2dc-830385f819fe", 00:13:42.033 "strip_size_kb": 64, 00:13:42.033 "state": "online", 00:13:42.033 "raid_level": "concat", 00:13:42.033 "superblock": true, 00:13:42.033 "num_base_bdevs": 3, 00:13:42.033 "num_base_bdevs_discovered": 3, 00:13:42.033 "num_base_bdevs_operational": 3, 00:13:42.033 "base_bdevs_list": [ 00:13:42.033 { 00:13:42.033 "name": "NewBaseBdev", 00:13:42.033 "uuid": "21e84e93-175c-4b91-9a32-56d1a20b03c7", 00:13:42.033 "is_configured": true, 00:13:42.033 "data_offset": 2048, 00:13:42.033 "data_size": 63488 
00:13:42.033 }, 00:13:42.033 { 00:13:42.033 "name": "BaseBdev2", 00:13:42.033 "uuid": "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa", 00:13:42.033 "is_configured": true, 00:13:42.033 "data_offset": 2048, 00:13:42.033 "data_size": 63488 00:13:42.033 }, 00:13:42.033 { 00:13:42.033 "name": "BaseBdev3", 00:13:42.033 "uuid": "4006a284-c9f8-4b55-bb97-22fc4732d6c0", 00:13:42.033 "is_configured": true, 00:13:42.033 "data_offset": 2048, 00:13:42.033 "data_size": 63488 00:13:42.033 } 00:13:42.033 ] 00:13:42.033 }' 00:13:42.033 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:42.033 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.600 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:13:42.600 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:42.600 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:42.600 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:42.600 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:42.600 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:13:42.600 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:42.600 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:42.859 [2024-05-15 03:08:13.912181] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.859 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:42.859 "name": "Existed_Raid", 00:13:42.859 "aliases": [ 00:13:42.859 "15e04061-fce1-4127-a2dc-830385f819fe" 00:13:42.859 ], 00:13:42.859 "product_name": "Raid Volume", 00:13:42.859 "block_size": 512, 00:13:42.859 "num_blocks": 190464, 00:13:42.859 "uuid": "15e04061-fce1-4127-a2dc-830385f819fe", 00:13:42.859 "assigned_rate_limits": { 00:13:42.859 "rw_ios_per_sec": 0, 00:13:42.859 "rw_mbytes_per_sec": 0, 00:13:42.859 "r_mbytes_per_sec": 0, 00:13:42.859 "w_mbytes_per_sec": 0 00:13:42.859 }, 00:13:42.859 "claimed": false, 00:13:42.859 "zoned": false, 00:13:42.859 "supported_io_types": { 00:13:42.859 "read": true, 00:13:42.859 "write": true, 00:13:42.859 "unmap": true, 00:13:42.859 "write_zeroes": true, 00:13:42.859 "flush": true, 00:13:42.859 "reset": true, 00:13:42.859 "compare": false, 00:13:42.859 "compare_and_write": false, 00:13:42.859 "abort": false, 00:13:42.859 "nvme_admin": false, 00:13:42.859 "nvme_io": false 00:13:42.859 }, 00:13:42.859 "memory_domains": [ 00:13:42.859 { 00:13:42.859 "dma_device_id": "system", 00:13:42.859 "dma_device_type": 1 00:13:42.859 }, 00:13:42.859 { 00:13:42.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.859 "dma_device_type": 2 00:13:42.859 }, 00:13:42.859 { 00:13:42.859 "dma_device_id": "system", 00:13:42.859 "dma_device_type": 1 00:13:42.859 }, 00:13:42.859 { 00:13:42.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.859 "dma_device_type": 2 00:13:42.859 }, 00:13:42.859 { 00:13:42.859 "dma_device_id": "system", 00:13:42.859 "dma_device_type": 1 00:13:42.859 }, 00:13:42.859 { 00:13:42.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:42.859 "dma_device_type": 2 00:13:42.859 } 00:13:42.859 ], 00:13:42.859 "driver_specific": { 00:13:42.859 "raid": { 00:13:42.859 "uuid": "15e04061-fce1-4127-a2dc-830385f819fe", 00:13:42.859 "strip_size_kb": 64, 00:13:42.859 "state": "online", 00:13:42.859 "raid_level": "concat", 00:13:42.859 "superblock": true, 00:13:42.859 "num_base_bdevs": 3, 00:13:42.859 "num_base_bdevs_discovered": 3, 00:13:42.859 "num_base_bdevs_operational": 3, 00:13:42.859 "base_bdevs_list": [ 00:13:42.859 { 00:13:42.859 "name": "NewBaseBdev", 00:13:42.860 "uuid": "21e84e93-175c-4b91-9a32-56d1a20b03c7", 00:13:42.860 "is_configured": true, 00:13:42.860 "data_offset": 2048, 00:13:42.860 "data_size": 63488 00:13:42.860 }, 00:13:42.860 { 00:13:42.860 "name": "BaseBdev2", 00:13:42.860 "uuid": "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa", 00:13:42.860 "is_configured": true, 00:13:42.860 "data_offset": 2048, 00:13:42.860 "data_size": 63488 00:13:42.860 }, 00:13:42.860 { 00:13:42.860 "name": "BaseBdev3", 00:13:42.860 "uuid": "4006a284-c9f8-4b55-bb97-22fc4732d6c0", 00:13:42.860 "is_configured": true, 00:13:42.860 "data_offset": 2048, 00:13:42.860 "data_size": 63488 00:13:42.860 } 00:13:42.860 ] 00:13:42.860 } 00:13:42.860 } 00:13:42.860 }' 00:13:42.860 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.860 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:13:42.860 BaseBdev2 00:13:42.860 BaseBdev3' 00:13:42.860 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:42.860 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:42.860 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:43.118 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:43.118 "name": "NewBaseBdev", 00:13:43.118 "aliases": [ 00:13:43.118 "21e84e93-175c-4b91-9a32-56d1a20b03c7" 00:13:43.118 ], 00:13:43.118 "product_name": "Malloc disk", 00:13:43.118 "block_size": 512, 00:13:43.118 "num_blocks": 65536, 00:13:43.118 "uuid": "21e84e93-175c-4b91-9a32-56d1a20b03c7", 00:13:43.118 "assigned_rate_limits": { 00:13:43.118 "rw_ios_per_sec": 0, 00:13:43.118 "rw_mbytes_per_sec": 0, 00:13:43.118 "r_mbytes_per_sec": 0, 00:13:43.118 "w_mbytes_per_sec": 0 00:13:43.119 }, 00:13:43.119 "claimed": true, 00:13:43.119 "claim_type": "exclusive_write", 00:13:43.119 "zoned": false, 00:13:43.119 "supported_io_types": { 00:13:43.119 "read": true, 00:13:43.119 "write": true, 00:13:43.119 "unmap": true, 00:13:43.119 "write_zeroes": true, 00:13:43.119 "flush": true, 00:13:43.119 "reset": true, 00:13:43.119 "compare": false, 00:13:43.119 "compare_and_write": false, 00:13:43.119 "abort": true, 00:13:43.119 "nvme_admin": false, 00:13:43.119 "nvme_io": false 00:13:43.119 }, 00:13:43.119 "memory_domains": [ 00:13:43.119 { 00:13:43.119 "dma_device_id": "system", 00:13:43.119 "dma_device_type": 1 00:13:43.119 }, 00:13:43.119 { 00:13:43.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.119 "dma_device_type": 2 00:13:43.119 } 00:13:43.119 ], 00:13:43.119 "driver_specific": {} 00:13:43.119 }' 00:13:43.119 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:43.377 03:08:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:43.377 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:43.377 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:43.377 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:43.377 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.377 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:43.377 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:43.377 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.377 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:43.636 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:43.636 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:43.636 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:43.636 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:43.636 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:43.895 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:43.895 "name": "BaseBdev2", 00:13:43.895 "aliases": [ 00:13:43.895 "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa" 00:13:43.895 ], 00:13:43.895 "product_name": "Malloc disk", 00:13:43.895 "block_size": 512, 00:13:43.895 "num_blocks": 65536, 00:13:43.895 "uuid": "1f9e8ecd-9467-45b9-9aaf-2e729d2bd8aa", 00:13:43.895 "assigned_rate_limits": { 00:13:43.895 "rw_ios_per_sec": 0, 00:13:43.895 "rw_mbytes_per_sec": 0, 00:13:43.895 "r_mbytes_per_sec": 0, 00:13:43.895 "w_mbytes_per_sec": 0 00:13:43.895 }, 00:13:43.895 "claimed": true, 00:13:43.895 "claim_type": "exclusive_write", 00:13:43.895 "zoned": false, 00:13:43.895 "supported_io_types": { 00:13:43.895 "read": true, 00:13:43.895 "write": true, 00:13:43.895 "unmap": true, 00:13:43.895 "write_zeroes": true, 00:13:43.895 "flush": true, 00:13:43.895 "reset": true, 00:13:43.895 "compare": false, 00:13:43.895 "compare_and_write": false, 00:13:43.895 "abort": true, 00:13:43.895 "nvme_admin": false, 00:13:43.895 "nvme_io": false 00:13:43.895 }, 00:13:43.895 "memory_domains": [ 00:13:43.895 { 00:13:43.895 "dma_device_id": "system", 00:13:43.895 "dma_device_type": 1 00:13:43.895 }, 00:13:43.895 { 00:13:43.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.895 "dma_device_type": 2 00:13:43.895 } 00:13:43.895 ], 00:13:43.895 "driver_specific": {} 00:13:43.895 }' 00:13:43.895 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:43.895 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:43.895 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:43.895 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:43.895 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:43.895 03:08:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.895 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:44.153 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:44.154 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.154 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:44.154 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:44.154 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:44.154 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:44.154 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:44.154 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:44.412 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:44.412 "name": "BaseBdev3", 00:13:44.412 "aliases": [ 00:13:44.412 "4006a284-c9f8-4b55-bb97-22fc4732d6c0" 00:13:44.412 ], 00:13:44.412 "product_name": "Malloc disk", 00:13:44.412 "block_size": 512, 00:13:44.412 "num_blocks": 65536, 00:13:44.412 "uuid": "4006a284-c9f8-4b55-bb97-22fc4732d6c0", 00:13:44.412 "assigned_rate_limits": { 00:13:44.412 "rw_ios_per_sec": 0, 00:13:44.412 "rw_mbytes_per_sec": 0, 00:13:44.412 "r_mbytes_per_sec": 0, 00:13:44.412 "w_mbytes_per_sec": 0 00:13:44.412 }, 00:13:44.412 "claimed": true, 00:13:44.412 "claim_type": "exclusive_write", 00:13:44.412 "zoned": false, 00:13:44.412 "supported_io_types": { 00:13:44.412 "read": true, 00:13:44.412 "write": true, 00:13:44.412 "unmap": true, 00:13:44.412 "write_zeroes": true, 00:13:44.412 "flush": true, 00:13:44.412 "reset": true, 00:13:44.412 "compare": false, 00:13:44.412 "compare_and_write": false, 00:13:44.412 "abort": true, 00:13:44.412 "nvme_admin": false, 00:13:44.412 "nvme_io": false 00:13:44.412 }, 00:13:44.412 "memory_domains": [ 00:13:44.412 { 00:13:44.412 "dma_device_id": "system", 00:13:44.412 "dma_device_type": 1 00:13:44.412 }, 00:13:44.412 { 00:13:44.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.412 "dma_device_type": 2 00:13:44.412 } 00:13:44.412 ], 00:13:44.412 "driver_specific": {} 00:13:44.412 }' 00:13:44.412 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:44.412 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:44.412 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:44.412 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:44.670 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:44.670 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:44.670 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:44.670 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:44.670 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.670 03:08:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:44.670 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:44.929 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:44.929 03:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:44.929 [2024-05-15 03:08:16.073676] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.929 [2024-05-15 03:08:16.073699] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.929 [2024-05-15 03:08:16.073748] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.929 [2024-05-15 03:08:16.073800] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.929 [2024-05-15 03:08:16.073809] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x245bc20 name Existed_Raid, state offline 00:13:45.187 03:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 4079072 00:13:45.187 03:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 4079072 ']' 00:13:45.187 03:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 4079072 00:13:45.187 03:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:13:45.187 03:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:45.188 03:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4079072 00:13:45.188 03:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:45.188 03:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:45.188 03:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4079072' 00:13:45.188 killing process with pid 4079072 00:13:45.188 03:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 4079072 00:13:45.188 [2024-05-15 03:08:16.136821] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.188 03:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 4079072 00:13:45.188 [2024-05-15 03:08:16.161672] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:45.446 03:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:13:45.447 00:13:45.447 real 0m29.604s 00:13:45.447 user 0m55.456s 00:13:45.447 sys 0m4.168s 00:13:45.447 03:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:45.447 03:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.447 ************************************ 00:13:45.447 END TEST raid_state_function_test_sb 00:13:45.447 ************************************ 00:13:45.447 03:08:16 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:13:45.447 03:08:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:45.447 03:08:16 bdev_raid -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:13:45.447 03:08:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.447 ************************************ 00:13:45.447 START TEST raid_superblock_test 00:13:45.447 ************************************ 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 3 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=4084440 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 4084440 /var/tmp/spdk-raid.sock 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 4084440 ']' 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:45.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:45.447 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.447 [2024-05-15 03:08:16.517313] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
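Annotation (not captured output): the raid_superblock_test trace below drives the bdev_svc app over /var/tmp/spdk-raid.sock via rpc.py. A minimal sketch of the setup sequence it performs, reconstructed only from commands visible in this log (sizes, names, UUIDs and flags are copied verbatim from the trace; the loop is an editorial condensation):

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3; do
        # 32 MiB malloc bdev with 512 B blocks -> 65536 blocks, matching the dumps below
        $rpc -s $sock bdev_malloc_create 32 512 -b malloc$i
        # passthru wrapper with a fixed UUID, so the superblock contents stay deterministic
        $rpc -s $sock bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    # concat volume over the three passthru bdevs: 64 KiB strip (-z 64), superblock enabled (-s)
    $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s

Each 65536-block base bdev contributes 63488 data blocks after the 2048-block superblock offset, so the concat volume reports num_blocks 190464 (3 x 63488), exactly as the bdev_get_bdevs dumps further down show.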
00:13:45.447 [2024-05-15 03:08:16.517368] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084440 ] 00:13:45.705 [2024-05-15 03:08:16.613599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.706 [2024-05-15 03:08:16.707700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.706 [2024-05-15 03:08:16.771702] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.706 [2024-05-15 03:08:16.771737] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.639 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:46.639 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:13:46.639 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:46.639 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:46.639 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:46.639 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:46.639 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:46.639 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:46.639 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:46.639 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:46.639 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:46.639 malloc1 00:13:46.639 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:46.898 [2024-05-15 03:08:17.970087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:46.898 [2024-05-15 03:08:17.970138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.898 [2024-05-15 03:08:17.970159] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20eba00 00:13:46.898 [2024-05-15 03:08:17.970169] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.898 [2024-05-15 03:08:17.971921] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.898 [2024-05-15 03:08:17.971949] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:46.898 pt1 00:13:46.898 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:46.898 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:46.898 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:46.898 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:46.898 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:46.898 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:46.898 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:46.898 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:46.898 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:47.156 malloc2 00:13:47.156 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:47.414 [2024-05-15 03:08:18.484158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:47.414 [2024-05-15 03:08:18.484200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.414 [2024-05-15 03:08:18.484220] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20ec5f0 00:13:47.414 [2024-05-15 03:08:18.484230] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.414 [2024-05-15 03:08:18.485785] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.414 [2024-05-15 03:08:18.485811] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:47.414 pt2 00:13:47.414 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:47.414 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:47.414 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:47.414 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:47.414 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:47.414 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:47.415 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:47.415 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:47.415 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:47.673 malloc3 00:13:47.673 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:47.931 [2024-05-15 03:08:18.998130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:47.931 [2024-05-15 03:08:18.998173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.931 [2024-05-15 03:08:18.998190] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2291900 00:13:47.931 [2024-05-15 03:08:18.998199] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.931 [2024-05-15 03:08:18.999762] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.931 [2024-05-15 03:08:18.999790] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:47.931 pt3 00:13:47.931 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:47.931 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:47.931 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:13:48.188 [2024-05-15 03:08:19.242787] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:48.188 [2024-05-15 03:08:19.244088] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:48.188 [2024-05-15 03:08:19.244144] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:48.188 [2024-05-15 03:08:19.244299] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x22940f0 00:13:48.188 [2024-05-15 03:08:19.244309] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:48.188 [2024-05-15 03:08:19.244501] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20ecf30 00:13:48.188 [2024-05-15 03:08:19.244648] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22940f0 00:13:48.188 [2024-05-15 03:08:19.244657] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22940f0 00:13:48.188 [2024-05-15 03:08:19.244752] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.188 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:48.188 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:48.188 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:48.188 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:48.188 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:48.188 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:48.188 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:48.188 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:48.188 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:48.188 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:48.188 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.188 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.446 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:48.446 "name": "raid_bdev1", 00:13:48.446 "uuid": "78fda9cc-bf75-47c2-847e-03669abc2c62", 00:13:48.446 "strip_size_kb": 64, 00:13:48.446 "state": "online", 00:13:48.446 "raid_level": "concat", 00:13:48.446 "superblock": true, 00:13:48.446 "num_base_bdevs": 3, 
00:13:48.446 "num_base_bdevs_discovered": 3, 00:13:48.446 "num_base_bdevs_operational": 3, 00:13:48.446 "base_bdevs_list": [ 00:13:48.446 { 00:13:48.446 "name": "pt1", 00:13:48.446 "uuid": "f17cf858-41aa-5566-96c0-cc0d43692303", 00:13:48.446 "is_configured": true, 00:13:48.446 "data_offset": 2048, 00:13:48.446 "data_size": 63488 00:13:48.446 }, 00:13:48.446 { 00:13:48.446 "name": "pt2", 00:13:48.446 "uuid": "2b66aad8-1917-57f5-8954-ac72cc8facfb", 00:13:48.446 "is_configured": true, 00:13:48.446 "data_offset": 2048, 00:13:48.446 "data_size": 63488 00:13:48.446 }, 00:13:48.446 { 00:13:48.446 "name": "pt3", 00:13:48.446 "uuid": "d202fb61-f9f5-51ed-a038-a15892a02775", 00:13:48.446 "is_configured": true, 00:13:48.446 "data_offset": 2048, 00:13:48.446 "data_size": 63488 00:13:48.446 } 00:13:48.446 ] 00:13:48.446 }' 00:13:48.446 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:48.446 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.012 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:49.012 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:13:49.012 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:49.012 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:49.012 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:49.012 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:49.012 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:49.012 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:49.270 [2024-05-15 03:08:20.386097] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.270 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:49.270 "name": "raid_bdev1", 00:13:49.270 "aliases": [ 00:13:49.270 "78fda9cc-bf75-47c2-847e-03669abc2c62" 00:13:49.270 ], 00:13:49.270 "product_name": "Raid Volume", 00:13:49.270 "block_size": 512, 00:13:49.270 "num_blocks": 190464, 00:13:49.270 "uuid": "78fda9cc-bf75-47c2-847e-03669abc2c62", 00:13:49.270 "assigned_rate_limits": { 00:13:49.270 "rw_ios_per_sec": 0, 00:13:49.270 "rw_mbytes_per_sec": 0, 00:13:49.270 "r_mbytes_per_sec": 0, 00:13:49.270 "w_mbytes_per_sec": 0 00:13:49.270 }, 00:13:49.270 "claimed": false, 00:13:49.270 "zoned": false, 00:13:49.270 "supported_io_types": { 00:13:49.270 "read": true, 00:13:49.270 "write": true, 00:13:49.270 "unmap": true, 00:13:49.270 "write_zeroes": true, 00:13:49.270 "flush": true, 00:13:49.270 "reset": true, 00:13:49.270 "compare": false, 00:13:49.270 "compare_and_write": false, 00:13:49.270 "abort": false, 00:13:49.270 "nvme_admin": false, 00:13:49.270 "nvme_io": false 00:13:49.270 }, 00:13:49.270 "memory_domains": [ 00:13:49.270 { 00:13:49.270 "dma_device_id": "system", 00:13:49.270 "dma_device_type": 1 00:13:49.270 }, 00:13:49.270 { 00:13:49.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.270 "dma_device_type": 2 00:13:49.270 }, 00:13:49.270 { 00:13:49.270 "dma_device_id": "system", 00:13:49.270 "dma_device_type": 1 00:13:49.270 }, 00:13:49.270 { 00:13:49.270 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:49.270 "dma_device_type": 2 00:13:49.270 }, 00:13:49.270 { 00:13:49.270 "dma_device_id": "system", 00:13:49.270 "dma_device_type": 1 00:13:49.270 }, 00:13:49.270 { 00:13:49.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.270 "dma_device_type": 2 00:13:49.270 } 00:13:49.270 ], 00:13:49.270 "driver_specific": { 00:13:49.270 "raid": { 00:13:49.270 "uuid": "78fda9cc-bf75-47c2-847e-03669abc2c62", 00:13:49.270 "strip_size_kb": 64, 00:13:49.270 "state": "online", 00:13:49.270 "raid_level": "concat", 00:13:49.270 "superblock": true, 00:13:49.270 "num_base_bdevs": 3, 00:13:49.270 "num_base_bdevs_discovered": 3, 00:13:49.270 "num_base_bdevs_operational": 3, 00:13:49.270 "base_bdevs_list": [ 00:13:49.270 { 00:13:49.270 "name": "pt1", 00:13:49.270 "uuid": "f17cf858-41aa-5566-96c0-cc0d43692303", 00:13:49.270 "is_configured": true, 00:13:49.270 "data_offset": 2048, 00:13:49.270 "data_size": 63488 00:13:49.270 }, 00:13:49.270 { 00:13:49.270 "name": "pt2", 00:13:49.270 "uuid": "2b66aad8-1917-57f5-8954-ac72cc8facfb", 00:13:49.270 "is_configured": true, 00:13:49.270 "data_offset": 2048, 00:13:49.270 "data_size": 63488 00:13:49.270 }, 00:13:49.270 { 00:13:49.270 "name": "pt3", 00:13:49.270 "uuid": "d202fb61-f9f5-51ed-a038-a15892a02775", 00:13:49.270 "is_configured": true, 00:13:49.270 "data_offset": 2048, 00:13:49.270 "data_size": 63488 00:13:49.270 } 00:13:49.270 ] 00:13:49.270 } 00:13:49.270 } 00:13:49.270 }' 00:13:49.270 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.529 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:13:49.529 pt2 00:13:49.529 pt3' 00:13:49.529 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:49.529 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:49.529 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:49.788 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:49.788 "name": "pt1", 00:13:49.788 "aliases": [ 00:13:49.788 "f17cf858-41aa-5566-96c0-cc0d43692303" 00:13:49.788 ], 00:13:49.788 "product_name": "passthru", 00:13:49.788 "block_size": 512, 00:13:49.788 "num_blocks": 65536, 00:13:49.788 "uuid": "f17cf858-41aa-5566-96c0-cc0d43692303", 00:13:49.788 "assigned_rate_limits": { 00:13:49.788 "rw_ios_per_sec": 0, 00:13:49.788 "rw_mbytes_per_sec": 0, 00:13:49.788 "r_mbytes_per_sec": 0, 00:13:49.788 "w_mbytes_per_sec": 0 00:13:49.788 }, 00:13:49.788 "claimed": true, 00:13:49.788 "claim_type": "exclusive_write", 00:13:49.788 "zoned": false, 00:13:49.788 "supported_io_types": { 00:13:49.788 "read": true, 00:13:49.788 "write": true, 00:13:49.788 "unmap": true, 00:13:49.788 "write_zeroes": true, 00:13:49.788 "flush": true, 00:13:49.788 "reset": true, 00:13:49.788 "compare": false, 00:13:49.788 "compare_and_write": false, 00:13:49.788 "abort": true, 00:13:49.788 "nvme_admin": false, 00:13:49.788 "nvme_io": false 00:13:49.788 }, 00:13:49.788 "memory_domains": [ 00:13:49.788 { 00:13:49.788 "dma_device_id": "system", 00:13:49.788 "dma_device_type": 1 00:13:49.788 }, 00:13:49.788 { 00:13:49.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.788 "dma_device_type": 2 00:13:49.788 } 00:13:49.788 ], 00:13:49.788 "driver_specific": { 
00:13:49.788 "passthru": { 00:13:49.788 "name": "pt1", 00:13:49.788 "base_bdev_name": "malloc1" 00:13:49.788 } 00:13:49.788 } 00:13:49.788 }' 00:13:49.788 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:49.788 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:49.788 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:49.788 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:49.788 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:49.788 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:49.788 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:50.055 03:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:50.055 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.055 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:50.055 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:50.055 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:50.055 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:50.055 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:50.055 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:50.362 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:50.362 "name": "pt2", 00:13:50.362 "aliases": [ 00:13:50.362 "2b66aad8-1917-57f5-8954-ac72cc8facfb" 00:13:50.362 ], 00:13:50.362 "product_name": "passthru", 00:13:50.363 "block_size": 512, 00:13:50.363 "num_blocks": 65536, 00:13:50.363 "uuid": "2b66aad8-1917-57f5-8954-ac72cc8facfb", 00:13:50.363 "assigned_rate_limits": { 00:13:50.363 "rw_ios_per_sec": 0, 00:13:50.363 "rw_mbytes_per_sec": 0, 00:13:50.363 "r_mbytes_per_sec": 0, 00:13:50.363 "w_mbytes_per_sec": 0 00:13:50.363 }, 00:13:50.363 "claimed": true, 00:13:50.363 "claim_type": "exclusive_write", 00:13:50.363 "zoned": false, 00:13:50.363 "supported_io_types": { 00:13:50.363 "read": true, 00:13:50.363 "write": true, 00:13:50.363 "unmap": true, 00:13:50.363 "write_zeroes": true, 00:13:50.363 "flush": true, 00:13:50.363 "reset": true, 00:13:50.363 "compare": false, 00:13:50.363 "compare_and_write": false, 00:13:50.363 "abort": true, 00:13:50.363 "nvme_admin": false, 00:13:50.363 "nvme_io": false 00:13:50.363 }, 00:13:50.363 "memory_domains": [ 00:13:50.363 { 00:13:50.363 "dma_device_id": "system", 00:13:50.363 "dma_device_type": 1 00:13:50.363 }, 00:13:50.363 { 00:13:50.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.363 "dma_device_type": 2 00:13:50.363 } 00:13:50.363 ], 00:13:50.363 "driver_specific": { 00:13:50.363 "passthru": { 00:13:50.363 "name": "pt2", 00:13:50.363 "base_bdev_name": "malloc2" 00:13:50.363 } 00:13:50.363 } 00:13:50.363 }' 00:13:50.363 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:50.363 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:50.363 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 
]] 00:13:50.363 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:50.363 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:50.621 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:50.621 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:50.621 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:50.621 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.621 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:50.621 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:50.621 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:50.621 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:50.621 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:50.621 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:50.879 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:50.879 "name": "pt3", 00:13:50.879 "aliases": [ 00:13:50.879 "d202fb61-f9f5-51ed-a038-a15892a02775" 00:13:50.879 ], 00:13:50.879 "product_name": "passthru", 00:13:50.879 "block_size": 512, 00:13:50.879 "num_blocks": 65536, 00:13:50.879 "uuid": "d202fb61-f9f5-51ed-a038-a15892a02775", 00:13:50.879 "assigned_rate_limits": { 00:13:50.879 "rw_ios_per_sec": 0, 00:13:50.879 "rw_mbytes_per_sec": 0, 00:13:50.879 "r_mbytes_per_sec": 0, 00:13:50.879 "w_mbytes_per_sec": 0 00:13:50.879 }, 00:13:50.879 "claimed": true, 00:13:50.879 "claim_type": "exclusive_write", 00:13:50.879 "zoned": false, 00:13:50.879 "supported_io_types": { 00:13:50.879 "read": true, 00:13:50.879 "write": true, 00:13:50.879 "unmap": true, 00:13:50.879 "write_zeroes": true, 00:13:50.879 "flush": true, 00:13:50.879 "reset": true, 00:13:50.879 "compare": false, 00:13:50.879 "compare_and_write": false, 00:13:50.879 "abort": true, 00:13:50.879 "nvme_admin": false, 00:13:50.879 "nvme_io": false 00:13:50.879 }, 00:13:50.879 "memory_domains": [ 00:13:50.879 { 00:13:50.879 "dma_device_id": "system", 00:13:50.879 "dma_device_type": 1 00:13:50.879 }, 00:13:50.879 { 00:13:50.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.879 "dma_device_type": 2 00:13:50.879 } 00:13:50.879 ], 00:13:50.879 "driver_specific": { 00:13:50.879 "passthru": { 00:13:50.879 "name": "pt3", 00:13:50.879 "base_bdev_name": "malloc3" 00:13:50.879 } 00:13:50.879 } 00:13:50.879 }' 00:13:50.879 03:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:50.879 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:51.137 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:51.137 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:51.137 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:51.137 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:51.137 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:51.137 03:08:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:51.137 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:51.137 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:51.394 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:51.394 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:51.394 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:51.394 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:51.652 [2024-05-15 03:08:22.571938] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.652 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=78fda9cc-bf75-47c2-847e-03669abc2c62 00:13:51.652 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 78fda9cc-bf75-47c2-847e-03669abc2c62 ']' 00:13:51.652 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:51.910 [2024-05-15 03:08:22.824332] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:51.910 [2024-05-15 03:08:22.824349] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.910 [2024-05-15 03:08:22.824391] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.910 [2024-05-15 03:08:22.824441] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.910 [2024-05-15 03:08:22.824449] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22940f0 name raid_bdev1, state offline 00:13:51.910 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.910 03:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:52.169 03:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:52.169 03:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:52.169 03:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:52.169 03:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:52.427 03:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:52.427 03:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:52.686 03:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:52.686 03:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:52.944 03:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:52.944 03:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:53.203 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:53.203 [2024-05-15 03:08:24.348319] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:53.203 [2024-05-15 03:08:24.349744] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:53.203 [2024-05-15 03:08:24.349787] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:53.203 [2024-05-15 03:08:24.349833] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:53.203 [2024-05-15 03:08:24.349876] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:53.203 [2024-05-15 03:08:24.349903] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:53.203 [2024-05-15 03:08:24.349918] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.203 [2024-05-15 03:08:24.349925] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x229d0f0 name raid_bdev1, state configuring 
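Annotation (not captured output): the NOT wrapper around bdev_raid_create above comes from common/autotest_common.sh; the request and error response dumped next show the expected rejection with code -17 (File exists), since the raid superblock written through the pt1..pt3 passthru bdevs is still present on the underlying malloc bdevs. A simplified stand-in with the same observable contract (succeed only when the wrapped command fails), reusing the $rpc/$sock shorthands from the earlier sketch; the real helper, partially visible in this trace, additionally distinguishes exit codes above 128:

    NOT() {
        # invert the exit status: the negative test passes only if "$@" fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    # expected to fail with JSON-RPC error -17 (File exists), as dumped below
    NOT $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1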
00:13:53.203 request: 00:13:53.203 { 00:13:53.203 "name": "raid_bdev1", 00:13:53.203 "raid_level": "concat", 00:13:53.203 "base_bdevs": [ 00:13:53.203 "malloc1", 00:13:53.203 "malloc2", 00:13:53.203 "malloc3" 00:13:53.203 ], 00:13:53.203 "superblock": false, 00:13:53.203 "strip_size_kb": 64, 00:13:53.203 "method": "bdev_raid_create", 00:13:53.203 "req_id": 1 00:13:53.203 } 00:13:53.203 Got JSON-RPC error response 00:13:53.203 response: 00:13:53.203 { 00:13:53.203 "code": -17, 00:13:53.203 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:53.203 } 00:13:53.461 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:53.461 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:53.461 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:53.461 03:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:53.461 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.461 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:53.461 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:53.461 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:53.719 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:53.719 [2024-05-15 03:08:24.853610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:53.719 [2024-05-15 03:08:24.853650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.719 [2024-05-15 03:08:24.853667] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x228f320 00:13:53.719 [2024-05-15 03:08:24.853676] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.719 [2024-05-15 03:08:24.855354] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.719 [2024-05-15 03:08:24.855380] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:53.719 [2024-05-15 03:08:24.855441] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:53.719 [2024-05-15 03:08:24.855465] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:53.719 pt1 00:13:53.719 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:53.719 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:53.719 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:53.719 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:53.719 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:53.719 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:53.719 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:53.719 03:08:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:53.719 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:53.719 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:53.977 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.977 03:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.977 03:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:53.977 "name": "raid_bdev1", 00:13:53.977 "uuid": "78fda9cc-bf75-47c2-847e-03669abc2c62", 00:13:53.977 "strip_size_kb": 64, 00:13:53.977 "state": "configuring", 00:13:53.977 "raid_level": "concat", 00:13:53.977 "superblock": true, 00:13:53.977 "num_base_bdevs": 3, 00:13:53.977 "num_base_bdevs_discovered": 1, 00:13:53.977 "num_base_bdevs_operational": 3, 00:13:53.977 "base_bdevs_list": [ 00:13:53.977 { 00:13:53.977 "name": "pt1", 00:13:53.977 "uuid": "f17cf858-41aa-5566-96c0-cc0d43692303", 00:13:53.977 "is_configured": true, 00:13:53.977 "data_offset": 2048, 00:13:53.977 "data_size": 63488 00:13:53.977 }, 00:13:53.977 { 00:13:53.977 "name": null, 00:13:53.977 "uuid": "2b66aad8-1917-57f5-8954-ac72cc8facfb", 00:13:53.977 "is_configured": false, 00:13:53.977 "data_offset": 2048, 00:13:53.977 "data_size": 63488 00:13:53.977 }, 00:13:53.977 { 00:13:53.977 "name": null, 00:13:53.977 "uuid": "d202fb61-f9f5-51ed-a038-a15892a02775", 00:13:53.977 "is_configured": false, 00:13:53.977 "data_offset": 2048, 00:13:53.977 "data_size": 63488 00:13:53.977 } 00:13:53.977 ] 00:13:53.977 }' 00:13:53.977 03:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:53.977 03:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.542 03:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:54.542 03:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:54.800 [2024-05-15 03:08:25.916466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:54.800 [2024-05-15 03:08:25.916509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.800 [2024-05-15 03:08:25.916527] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20e3c50 00:13:54.800 [2024-05-15 03:08:25.916537] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.800 [2024-05-15 03:08:25.916869] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.800 [2024-05-15 03:08:25.916884] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:54.800 [2024-05-15 03:08:25.916938] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:54.800 [2024-05-15 03:08:25.916955] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:54.800 pt2 00:13:54.800 03:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:55.057 
[2024-05-15 03:08:26.173167] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:55.057 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:55.057 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:55.057 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:55.057 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:55.057 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:55.057 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:55.057 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:55.057 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:55.057 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:55.057 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:55.057 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.057 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.315 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:55.315 "name": "raid_bdev1", 00:13:55.315 "uuid": "78fda9cc-bf75-47c2-847e-03669abc2c62", 00:13:55.315 "strip_size_kb": 64, 00:13:55.315 "state": "configuring", 00:13:55.315 "raid_level": "concat", 00:13:55.315 "superblock": true, 00:13:55.315 "num_base_bdevs": 3, 00:13:55.315 "num_base_bdevs_discovered": 1, 00:13:55.315 "num_base_bdevs_operational": 3, 00:13:55.315 "base_bdevs_list": [ 00:13:55.315 { 00:13:55.315 "name": "pt1", 00:13:55.315 "uuid": "f17cf858-41aa-5566-96c0-cc0d43692303", 00:13:55.315 "is_configured": true, 00:13:55.315 "data_offset": 2048, 00:13:55.315 "data_size": 63488 00:13:55.315 }, 00:13:55.315 { 00:13:55.315 "name": null, 00:13:55.315 "uuid": "2b66aad8-1917-57f5-8954-ac72cc8facfb", 00:13:55.315 "is_configured": false, 00:13:55.315 "data_offset": 2048, 00:13:55.315 "data_size": 63488 00:13:55.315 }, 00:13:55.315 { 00:13:55.315 "name": null, 00:13:55.315 "uuid": "d202fb61-f9f5-51ed-a038-a15892a02775", 00:13:55.315 "is_configured": false, 00:13:55.315 "data_offset": 2048, 00:13:55.315 "data_size": 63488 00:13:55.315 } 00:13:55.315 ] 00:13:55.315 }' 00:13:55.315 03:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:55.315 03:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.248 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:56.248 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:56.248 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:56.248 [2024-05-15 03:08:27.320209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:56.248 [2024-05-15 03:08:27.320252] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.248 [2024-05-15 03:08:27.320270] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20e33c0 00:13:56.248 [2024-05-15 03:08:27.320279] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.248 [2024-05-15 03:08:27.320608] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.248 [2024-05-15 03:08:27.320622] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:56.248 [2024-05-15 03:08:27.320678] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:56.248 [2024-05-15 03:08:27.320695] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:56.248 pt2 00:13:56.248 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:56.248 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:56.248 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:56.507 [2024-05-15 03:08:27.576902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:56.507 [2024-05-15 03:08:27.576938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.507 [2024-05-15 03:08:27.576952] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20e2f10 00:13:56.507 [2024-05-15 03:08:27.576961] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.507 [2024-05-15 03:08:27.577269] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.507 [2024-05-15 03:08:27.577283] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:56.507 [2024-05-15 03:08:27.577333] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:56.507 [2024-05-15 03:08:27.577348] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:56.507 [2024-05-15 03:08:27.577454] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x20e2480 00:13:56.507 [2024-05-15 03:08:27.577462] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:56.507 [2024-05-15 03:08:27.577640] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20e68a0 00:13:56.507 [2024-05-15 03:08:27.577770] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20e2480 00:13:56.507 [2024-05-15 03:08:27.577778] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20e2480 00:13:56.507 [2024-05-15 03:08:27.577883] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.507 pt3 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.507 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.765 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:56.765 "name": "raid_bdev1", 00:13:56.765 "uuid": "78fda9cc-bf75-47c2-847e-03669abc2c62", 00:13:56.765 "strip_size_kb": 64, 00:13:56.765 "state": "online", 00:13:56.765 "raid_level": "concat", 00:13:56.765 "superblock": true, 00:13:56.765 "num_base_bdevs": 3, 00:13:56.765 "num_base_bdevs_discovered": 3, 00:13:56.765 "num_base_bdevs_operational": 3, 00:13:56.765 "base_bdevs_list": [ 00:13:56.765 { 00:13:56.765 "name": "pt1", 00:13:56.765 "uuid": "f17cf858-41aa-5566-96c0-cc0d43692303", 00:13:56.765 "is_configured": true, 00:13:56.765 "data_offset": 2048, 00:13:56.765 "data_size": 63488 00:13:56.765 }, 00:13:56.765 { 00:13:56.765 "name": "pt2", 00:13:56.765 "uuid": "2b66aad8-1917-57f5-8954-ac72cc8facfb", 00:13:56.765 "is_configured": true, 00:13:56.765 "data_offset": 2048, 00:13:56.765 "data_size": 63488 00:13:56.765 }, 00:13:56.765 { 00:13:56.765 "name": "pt3", 00:13:56.765 "uuid": "d202fb61-f9f5-51ed-a038-a15892a02775", 00:13:56.765 "is_configured": true, 00:13:56.765 "data_offset": 2048, 00:13:56.765 "data_size": 63488 00:13:56.765 } 00:13:56.765 ] 00:13:56.765 }' 00:13:56.765 03:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:56.765 03:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.332 03:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:57.332 03:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:13:57.332 03:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:57.332 03:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:57.332 03:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:57.332 03:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:57.332 03:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:57.332 03:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:57.590 [2024-05-15 03:08:28.724226] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.590 03:08:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:57.590 "name": "raid_bdev1", 00:13:57.590 "aliases": [ 00:13:57.590 "78fda9cc-bf75-47c2-847e-03669abc2c62" 00:13:57.590 ], 00:13:57.590 "product_name": "Raid Volume", 00:13:57.590 "block_size": 512, 00:13:57.590 "num_blocks": 190464, 00:13:57.590 "uuid": "78fda9cc-bf75-47c2-847e-03669abc2c62", 00:13:57.590 "assigned_rate_limits": { 00:13:57.590 "rw_ios_per_sec": 0, 00:13:57.590 "rw_mbytes_per_sec": 0, 00:13:57.590 "r_mbytes_per_sec": 0, 00:13:57.590 "w_mbytes_per_sec": 0 00:13:57.590 }, 00:13:57.590 "claimed": false, 00:13:57.590 "zoned": false, 00:13:57.590 "supported_io_types": { 00:13:57.590 "read": true, 00:13:57.590 "write": true, 00:13:57.590 "unmap": true, 00:13:57.590 "write_zeroes": true, 00:13:57.590 "flush": true, 00:13:57.590 "reset": true, 00:13:57.590 "compare": false, 00:13:57.590 "compare_and_write": false, 00:13:57.590 "abort": false, 00:13:57.590 "nvme_admin": false, 00:13:57.590 "nvme_io": false 00:13:57.590 }, 00:13:57.590 "memory_domains": [ 00:13:57.590 { 00:13:57.590 "dma_device_id": "system", 00:13:57.590 "dma_device_type": 1 00:13:57.590 }, 00:13:57.590 { 00:13:57.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.590 "dma_device_type": 2 00:13:57.590 }, 00:13:57.590 { 00:13:57.590 "dma_device_id": "system", 00:13:57.590 "dma_device_type": 1 00:13:57.590 }, 00:13:57.590 { 00:13:57.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.590 "dma_device_type": 2 00:13:57.590 }, 00:13:57.590 { 00:13:57.590 "dma_device_id": "system", 00:13:57.590 "dma_device_type": 1 00:13:57.590 }, 00:13:57.590 { 00:13:57.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.590 "dma_device_type": 2 00:13:57.590 } 00:13:57.590 ], 00:13:57.590 "driver_specific": { 00:13:57.590 "raid": { 00:13:57.590 "uuid": "78fda9cc-bf75-47c2-847e-03669abc2c62", 00:13:57.590 "strip_size_kb": 64, 00:13:57.590 "state": "online", 00:13:57.590 "raid_level": "concat", 00:13:57.590 "superblock": true, 00:13:57.590 "num_base_bdevs": 3, 00:13:57.590 "num_base_bdevs_discovered": 3, 00:13:57.590 "num_base_bdevs_operational": 3, 00:13:57.590 "base_bdevs_list": [ 00:13:57.590 { 00:13:57.590 "name": "pt1", 00:13:57.590 "uuid": "f17cf858-41aa-5566-96c0-cc0d43692303", 00:13:57.590 "is_configured": true, 00:13:57.590 "data_offset": 2048, 00:13:57.590 "data_size": 63488 00:13:57.590 }, 00:13:57.590 { 00:13:57.590 "name": "pt2", 00:13:57.590 "uuid": "2b66aad8-1917-57f5-8954-ac72cc8facfb", 00:13:57.591 "is_configured": true, 00:13:57.591 "data_offset": 2048, 00:13:57.591 "data_size": 63488 00:13:57.591 }, 00:13:57.591 { 00:13:57.591 "name": "pt3", 00:13:57.591 "uuid": "d202fb61-f9f5-51ed-a038-a15892a02775", 00:13:57.591 "is_configured": true, 00:13:57.591 "data_offset": 2048, 00:13:57.591 "data_size": 63488 00:13:57.591 } 00:13:57.591 ] 00:13:57.591 } 00:13:57.591 } 00:13:57.591 }' 00:13:57.849 03:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:57.849 03:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:13:57.849 pt2 00:13:57.849 pt3' 00:13:57.849 03:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:57.849 03:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:57.849 03:08:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:58.107 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:58.107 "name": "pt1", 00:13:58.107 "aliases": [ 00:13:58.107 "f17cf858-41aa-5566-96c0-cc0d43692303" 00:13:58.107 ], 00:13:58.107 "product_name": "passthru", 00:13:58.107 "block_size": 512, 00:13:58.107 "num_blocks": 65536, 00:13:58.107 "uuid": "f17cf858-41aa-5566-96c0-cc0d43692303", 00:13:58.107 "assigned_rate_limits": { 00:13:58.107 "rw_ios_per_sec": 0, 00:13:58.107 "rw_mbytes_per_sec": 0, 00:13:58.107 "r_mbytes_per_sec": 0, 00:13:58.107 "w_mbytes_per_sec": 0 00:13:58.107 }, 00:13:58.107 "claimed": true, 00:13:58.107 "claim_type": "exclusive_write", 00:13:58.107 "zoned": false, 00:13:58.107 "supported_io_types": { 00:13:58.107 "read": true, 00:13:58.107 "write": true, 00:13:58.107 "unmap": true, 00:13:58.107 "write_zeroes": true, 00:13:58.107 "flush": true, 00:13:58.107 "reset": true, 00:13:58.107 "compare": false, 00:13:58.107 "compare_and_write": false, 00:13:58.107 "abort": true, 00:13:58.107 "nvme_admin": false, 00:13:58.107 "nvme_io": false 00:13:58.107 }, 00:13:58.108 "memory_domains": [ 00:13:58.108 { 00:13:58.108 "dma_device_id": "system", 00:13:58.108 "dma_device_type": 1 00:13:58.108 }, 00:13:58.108 { 00:13:58.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.108 "dma_device_type": 2 00:13:58.108 } 00:13:58.108 ], 00:13:58.108 "driver_specific": { 00:13:58.108 "passthru": { 00:13:58.108 "name": "pt1", 00:13:58.108 "base_bdev_name": "malloc1" 00:13:58.108 } 00:13:58.108 } 00:13:58.108 }' 00:13:58.108 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:58.108 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:58.108 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:58.108 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:58.108 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:58.108 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:58.108 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:58.366 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:58.366 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:58.366 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:58.366 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:58.366 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:58.366 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:58.366 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:58.366 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:58.625 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:58.625 "name": "pt2", 00:13:58.625 "aliases": [ 00:13:58.625 "2b66aad8-1917-57f5-8954-ac72cc8facfb" 00:13:58.625 ], 00:13:58.625 "product_name": "passthru", 00:13:58.625 "block_size": 512, 00:13:58.625 "num_blocks": 65536, 00:13:58.625 "uuid": 
"2b66aad8-1917-57f5-8954-ac72cc8facfb", 00:13:58.625 "assigned_rate_limits": { 00:13:58.625 "rw_ios_per_sec": 0, 00:13:58.625 "rw_mbytes_per_sec": 0, 00:13:58.625 "r_mbytes_per_sec": 0, 00:13:58.625 "w_mbytes_per_sec": 0 00:13:58.625 }, 00:13:58.625 "claimed": true, 00:13:58.625 "claim_type": "exclusive_write", 00:13:58.625 "zoned": false, 00:13:58.625 "supported_io_types": { 00:13:58.625 "read": true, 00:13:58.625 "write": true, 00:13:58.625 "unmap": true, 00:13:58.625 "write_zeroes": true, 00:13:58.625 "flush": true, 00:13:58.625 "reset": true, 00:13:58.625 "compare": false, 00:13:58.625 "compare_and_write": false, 00:13:58.625 "abort": true, 00:13:58.625 "nvme_admin": false, 00:13:58.625 "nvme_io": false 00:13:58.625 }, 00:13:58.625 "memory_domains": [ 00:13:58.625 { 00:13:58.625 "dma_device_id": "system", 00:13:58.625 "dma_device_type": 1 00:13:58.625 }, 00:13:58.625 { 00:13:58.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.625 "dma_device_type": 2 00:13:58.625 } 00:13:58.625 ], 00:13:58.625 "driver_specific": { 00:13:58.625 "passthru": { 00:13:58.625 "name": "pt2", 00:13:58.625 "base_bdev_name": "malloc2" 00:13:58.625 } 00:13:58.625 } 00:13:58.625 }' 00:13:58.625 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:58.625 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:58.625 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:58.625 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:58.883 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:58.883 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:58.883 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:58.883 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:58.883 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:58.883 03:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:58.883 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:59.140 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:59.140 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:59.140 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:59.140 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:59.398 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:59.398 "name": "pt3", 00:13:59.398 "aliases": [ 00:13:59.398 "d202fb61-f9f5-51ed-a038-a15892a02775" 00:13:59.398 ], 00:13:59.398 "product_name": "passthru", 00:13:59.398 "block_size": 512, 00:13:59.398 "num_blocks": 65536, 00:13:59.398 "uuid": "d202fb61-f9f5-51ed-a038-a15892a02775", 00:13:59.398 "assigned_rate_limits": { 00:13:59.398 "rw_ios_per_sec": 0, 00:13:59.398 "rw_mbytes_per_sec": 0, 00:13:59.398 "r_mbytes_per_sec": 0, 00:13:59.398 "w_mbytes_per_sec": 0 00:13:59.398 }, 00:13:59.398 "claimed": true, 00:13:59.398 "claim_type": "exclusive_write", 00:13:59.398 "zoned": false, 00:13:59.399 "supported_io_types": { 00:13:59.399 "read": true, 00:13:59.399 "write": true, 
00:13:59.399 "unmap": true, 00:13:59.399 "write_zeroes": true, 00:13:59.399 "flush": true, 00:13:59.399 "reset": true, 00:13:59.399 "compare": false, 00:13:59.399 "compare_and_write": false, 00:13:59.399 "abort": true, 00:13:59.399 "nvme_admin": false, 00:13:59.399 "nvme_io": false 00:13:59.399 }, 00:13:59.399 "memory_domains": [ 00:13:59.399 { 00:13:59.399 "dma_device_id": "system", 00:13:59.399 "dma_device_type": 1 00:13:59.399 }, 00:13:59.399 { 00:13:59.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.399 "dma_device_type": 2 00:13:59.399 } 00:13:59.399 ], 00:13:59.399 "driver_specific": { 00:13:59.399 "passthru": { 00:13:59.399 "name": "pt3", 00:13:59.399 "base_bdev_name": "malloc3" 00:13:59.399 } 00:13:59.399 } 00:13:59.399 }' 00:13:59.399 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:59.399 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:59.399 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:59.399 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:59.399 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:59.399 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:59.399 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:59.399 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:59.657 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:59.657 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:59.657 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:59.657 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:59.657 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:59.657 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:59.915 [2024-05-15 03:08:30.914135] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 78fda9cc-bf75-47c2-847e-03669abc2c62 '!=' 78fda9cc-bf75-47c2-847e-03669abc2c62 ']' 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 4084440 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 4084440 ']' 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 4084440 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4084440 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4084440' 00:13:59.915 killing process with pid 4084440 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 4084440 00:13:59.915 [2024-05-15 03:08:30.978430] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:59.915 [2024-05-15 03:08:30.978491] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.915 [2024-05-15 03:08:30.978545] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.915 [2024-05-15 03:08:30.978554] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20e2480 name raid_bdev1, state offline 00:13:59.915 03:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 4084440 00:13:59.915 [2024-05-15 03:08:31.004426] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:00.174 03:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:14:00.174 00:14:00.174 real 0m14.767s 00:14:00.174 user 0m27.174s 00:14:00.174 sys 0m2.108s 00:14:00.174 03:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:00.174 03:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.174 ************************************ 00:14:00.174 END TEST raid_superblock_test 00:14:00.174 ************************************ 00:14:00.174 03:08:31 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:14:00.174 03:08:31 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:14:00.174 03:08:31 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:00.174 03:08:31 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:00.174 03:08:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:00.174 ************************************ 00:14:00.174 START TEST raid_state_function_test 00:14:00.174 ************************************ 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 false 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:14:00.174 
03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=4087220 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4087220' 00:14:00.174 Process raid pid: 4087220 00:14:00.174 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:00.175 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 4087220 /var/tmp/spdk-raid.sock 00:14:00.175 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 4087220 ']' 00:14:00.175 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:00.175 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:00.175 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:00.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:00.175 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:00.175 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.433 [2024-05-15 03:08:31.367597] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
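
The output above shows raid_state_function_test launching a bare bdev_svc application with bdev_raid debug logging (-L bdev_raid) on a private RPC socket, then blocking in waitforlisten until that socket answers before any rpc.py call is issued. A minimal sketch of that launch-and-wait pattern, assuming a standard SPDK checkout layout (test/app/bdev_svc/bdev_svc, scripts/rpc.py) and using the rpc_get_methods RPC as a readiness probe in place of the harness's waitforlisten helper:

#!/usr/bin/env bash
# Sketch only: start bdev_svc on a private RPC socket and wait for it to listen.
set -e

SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk   # adjust for your tree
SOCK=/var/tmp/spdk-raid.sock

# Same invocation the log shows: -r <rpc socket> -i <shm id> -L <debug log flag>.
"$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
raid_pid=$!

# Poll the socket until an RPC succeeds (stand-in for the waitforlisten helper).
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
echo "bdev_svc up, pid $raid_pid"
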
00:14:00.433 [2024-05-15 03:08:31.367653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.433 [2024-05-15 03:08:31.464865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.433 [2024-05-15 03:08:31.558906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.691 [2024-05-15 03:08:31.621891] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.691 [2024-05-15 03:08:31.621925] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.256 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:01.256 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:14:01.256 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:01.513 [2024-05-15 03:08:32.549327] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:01.513 [2024-05-15 03:08:32.549366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:01.513 [2024-05-15 03:08:32.549376] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.513 [2024-05-15 03:08:32.549385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.513 [2024-05-15 03:08:32.549393] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.513 [2024-05-15 03:08:32.549401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.513 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:01.513 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:01.513 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:01.513 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:01.513 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:01.513 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:01.514 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:01.514 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:01.514 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:01.514 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:01.514 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.514 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.771 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:14:01.771 "name": "Existed_Raid", 00:14:01.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.771 "strip_size_kb": 0, 00:14:01.771 "state": "configuring", 00:14:01.771 "raid_level": "raid1", 00:14:01.771 "superblock": false, 00:14:01.771 "num_base_bdevs": 3, 00:14:01.771 "num_base_bdevs_discovered": 0, 00:14:01.771 "num_base_bdevs_operational": 3, 00:14:01.771 "base_bdevs_list": [ 00:14:01.771 { 00:14:01.771 "name": "BaseBdev1", 00:14:01.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.771 "is_configured": false, 00:14:01.772 "data_offset": 0, 00:14:01.772 "data_size": 0 00:14:01.772 }, 00:14:01.772 { 00:14:01.772 "name": "BaseBdev2", 00:14:01.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.772 "is_configured": false, 00:14:01.772 "data_offset": 0, 00:14:01.772 "data_size": 0 00:14:01.772 }, 00:14:01.772 { 00:14:01.772 "name": "BaseBdev3", 00:14:01.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.772 "is_configured": false, 00:14:01.772 "data_offset": 0, 00:14:01.772 "data_size": 0 00:14:01.772 } 00:14:01.772 ] 00:14:01.772 }' 00:14:01.772 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:01.772 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.335 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:02.591 [2024-05-15 03:08:33.612048] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:02.591 [2024-05-15 03:08:33.612076] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c60de0 name Existed_Raid, state configuring 00:14:02.591 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:02.846 [2024-05-15 03:08:33.780504] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:02.846 [2024-05-15 03:08:33.780530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:02.846 [2024-05-15 03:08:33.780538] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:02.846 [2024-05-15 03:08:33.780547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:02.846 [2024-05-15 03:08:33.780554] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:02.846 [2024-05-15 03:08:33.780563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:02.846 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:02.846 [2024-05-15 03:08:33.954479] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.846 BaseBdev1 00:14:02.847 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:14:02.847 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:02.847 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:02.847 03:08:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:02.847 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:02.847 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:02.847 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:03.105 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:03.363 [ 00:14:03.363 { 00:14:03.363 "name": "BaseBdev1", 00:14:03.363 "aliases": [ 00:14:03.363 "15818de5-696e-4525-be11-8fa2a811712c" 00:14:03.363 ], 00:14:03.363 "product_name": "Malloc disk", 00:14:03.363 "block_size": 512, 00:14:03.363 "num_blocks": 65536, 00:14:03.363 "uuid": "15818de5-696e-4525-be11-8fa2a811712c", 00:14:03.363 "assigned_rate_limits": { 00:14:03.363 "rw_ios_per_sec": 0, 00:14:03.363 "rw_mbytes_per_sec": 0, 00:14:03.363 "r_mbytes_per_sec": 0, 00:14:03.363 "w_mbytes_per_sec": 0 00:14:03.363 }, 00:14:03.363 "claimed": true, 00:14:03.363 "claim_type": "exclusive_write", 00:14:03.363 "zoned": false, 00:14:03.363 "supported_io_types": { 00:14:03.363 "read": true, 00:14:03.363 "write": true, 00:14:03.363 "unmap": true, 00:14:03.363 "write_zeroes": true, 00:14:03.363 "flush": true, 00:14:03.363 "reset": true, 00:14:03.363 "compare": false, 00:14:03.363 "compare_and_write": false, 00:14:03.363 "abort": true, 00:14:03.363 "nvme_admin": false, 00:14:03.363 "nvme_io": false 00:14:03.363 }, 00:14:03.363 "memory_domains": [ 00:14:03.363 { 00:14:03.363 "dma_device_id": "system", 00:14:03.363 "dma_device_type": 1 00:14:03.363 }, 00:14:03.363 { 00:14:03.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.363 "dma_device_type": 2 00:14:03.363 } 00:14:03.363 ], 00:14:03.363 "driver_specific": {} 00:14:03.363 } 00:14:03.363 ] 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
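
Every verify_raid_bdev_state call in this run reduces to the RPC-plus-jq pipeline logged here at bdev_raid.sh@127: dump all raid bdevs with bdev_raid_get_bdevs all, select the bdev under test by name, then compare fields such as state, raid_level, strip_size_kb and the num_base_bdevs_* counters against the expected values. A condensed sketch of that check (the helper name check_raid_state is illustrative, not the harness's; socket and paths as in the log):

#!/usr/bin/env bash
# Sketch only: the state check that verify_raid_bdev_state performs in this log.
SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
SOCK=/var/tmp/spdk-raid.sock

check_raid_state() {
    local name=$1 expected_state=$2 expected_level=$3
    local info
    # Same pipeline as bdev_raid.sh@127: dump everything, select one by name.
    info=$("$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_raid_get_bdevs all |
        jq -r --arg n "$name" '.[] | select(.name == $n)')
    [[ $(jq -r .state <<<"$info") == "$expected_state" ]] || return 1
    [[ $(jq -r .raid_level <<<"$info") == "$expected_level" ]] || return 1
}

# E.g. the check in progress above: Existed_Raid should still be assembling.
check_raid_state Existed_Raid configuring raid1
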
00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.363 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:03.363 "name": "Existed_Raid", 00:14:03.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.363 "strip_size_kb": 0, 00:14:03.363 "state": "configuring", 00:14:03.363 "raid_level": "raid1", 00:14:03.363 "superblock": false, 00:14:03.363 "num_base_bdevs": 3, 00:14:03.363 "num_base_bdevs_discovered": 1, 00:14:03.363 "num_base_bdevs_operational": 3, 00:14:03.363 "base_bdevs_list": [ 00:14:03.363 { 00:14:03.363 "name": "BaseBdev1", 00:14:03.363 "uuid": "15818de5-696e-4525-be11-8fa2a811712c", 00:14:03.363 "is_configured": true, 00:14:03.363 "data_offset": 0, 00:14:03.363 "data_size": 65536 00:14:03.363 }, 00:14:03.363 { 00:14:03.363 "name": "BaseBdev2", 00:14:03.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.363 "is_configured": false, 00:14:03.363 "data_offset": 0, 00:14:03.363 "data_size": 0 00:14:03.363 }, 00:14:03.363 { 00:14:03.363 "name": "BaseBdev3", 00:14:03.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.364 "is_configured": false, 00:14:03.364 "data_offset": 0, 00:14:03.364 "data_size": 0 00:14:03.364 } 00:14:03.364 ] 00:14:03.364 }' 00:14:03.364 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:03.364 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.929 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:04.186 [2024-05-15 03:08:35.310097] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.186 [2024-05-15 03:08:35.310135] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c606b0 name Existed_Raid, state configuring 00:14:04.186 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:04.444 [2024-05-15 03:08:35.550768] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.444 [2024-05-15 03:08:35.552284] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:04.444 [2024-05-15 03:08:35.552315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:04.444 [2024-05-15 03:08:35.552323] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:04.444 [2024-05-15 03:08:35.552333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:04.444 03:08:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.444 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.702 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:04.702 "name": "Existed_Raid", 00:14:04.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.702 "strip_size_kb": 0, 00:14:04.702 "state": "configuring", 00:14:04.702 "raid_level": "raid1", 00:14:04.702 "superblock": false, 00:14:04.702 "num_base_bdevs": 3, 00:14:04.702 "num_base_bdevs_discovered": 1, 00:14:04.702 "num_base_bdevs_operational": 3, 00:14:04.702 "base_bdevs_list": [ 00:14:04.702 { 00:14:04.702 "name": "BaseBdev1", 00:14:04.702 "uuid": "15818de5-696e-4525-be11-8fa2a811712c", 00:14:04.702 "is_configured": true, 00:14:04.702 "data_offset": 0, 00:14:04.702 "data_size": 65536 00:14:04.702 }, 00:14:04.702 { 00:14:04.702 "name": "BaseBdev2", 00:14:04.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.702 "is_configured": false, 00:14:04.702 "data_offset": 0, 00:14:04.702 "data_size": 0 00:14:04.702 }, 00:14:04.702 { 00:14:04.702 "name": "BaseBdev3", 00:14:04.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.702 "is_configured": false, 00:14:04.702 "data_offset": 0, 00:14:04.702 "data_size": 0 00:14:04.702 } 00:14:04.702 ] 00:14:04.702 }' 00:14:04.702 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:04.702 03:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.633 03:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:05.633 [2024-05-15 03:08:36.673076] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.633 BaseBdev2 00:14:05.633 03:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:14:05.633 03:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:05.633 03:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:05.633 03:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:05.633 03:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:05.633 03:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:05.633 03:08:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:05.890 03:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:06.147 [ 00:14:06.147 { 00:14:06.147 "name": "BaseBdev2", 00:14:06.147 "aliases": [ 00:14:06.147 "8e4df38e-3e36-40e1-a97b-2068dff28e7a" 00:14:06.147 ], 00:14:06.147 "product_name": "Malloc disk", 00:14:06.147 "block_size": 512, 00:14:06.147 "num_blocks": 65536, 00:14:06.147 "uuid": "8e4df38e-3e36-40e1-a97b-2068dff28e7a", 00:14:06.147 "assigned_rate_limits": { 00:14:06.147 "rw_ios_per_sec": 0, 00:14:06.147 "rw_mbytes_per_sec": 0, 00:14:06.147 "r_mbytes_per_sec": 0, 00:14:06.147 "w_mbytes_per_sec": 0 00:14:06.147 }, 00:14:06.147 "claimed": true, 00:14:06.147 "claim_type": "exclusive_write", 00:14:06.147 "zoned": false, 00:14:06.147 "supported_io_types": { 00:14:06.147 "read": true, 00:14:06.147 "write": true, 00:14:06.147 "unmap": true, 00:14:06.147 "write_zeroes": true, 00:14:06.147 "flush": true, 00:14:06.147 "reset": true, 00:14:06.147 "compare": false, 00:14:06.147 "compare_and_write": false, 00:14:06.147 "abort": true, 00:14:06.147 "nvme_admin": false, 00:14:06.147 "nvme_io": false 00:14:06.147 }, 00:14:06.147 "memory_domains": [ 00:14:06.147 { 00:14:06.147 "dma_device_id": "system", 00:14:06.147 "dma_device_type": 1 00:14:06.147 }, 00:14:06.147 { 00:14:06.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.147 "dma_device_type": 2 00:14:06.147 } 00:14:06.147 ], 00:14:06.147 "driver_specific": {} 00:14:06.147 } 00:14:06.147 ] 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.147 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:06.404 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:06.404 "name": "Existed_Raid", 00:14:06.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.404 "strip_size_kb": 0, 00:14:06.404 "state": "configuring", 00:14:06.404 "raid_level": "raid1", 00:14:06.404 "superblock": false, 00:14:06.404 "num_base_bdevs": 3, 00:14:06.404 "num_base_bdevs_discovered": 2, 00:14:06.404 "num_base_bdevs_operational": 3, 00:14:06.404 "base_bdevs_list": [ 00:14:06.404 { 00:14:06.404 "name": "BaseBdev1", 00:14:06.404 "uuid": "15818de5-696e-4525-be11-8fa2a811712c", 00:14:06.404 "is_configured": true, 00:14:06.404 "data_offset": 0, 00:14:06.404 "data_size": 65536 00:14:06.404 }, 00:14:06.404 { 00:14:06.404 "name": "BaseBdev2", 00:14:06.404 "uuid": "8e4df38e-3e36-40e1-a97b-2068dff28e7a", 00:14:06.404 "is_configured": true, 00:14:06.404 "data_offset": 0, 00:14:06.404 "data_size": 65536 00:14:06.404 }, 00:14:06.404 { 00:14:06.404 "name": "BaseBdev3", 00:14:06.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.404 "is_configured": false, 00:14:06.404 "data_offset": 0, 00:14:06.404 "data_size": 0 00:14:06.404 } 00:14:06.404 ] 00:14:06.404 }' 00:14:06.404 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:06.404 03:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.998 03:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:07.257 [2024-05-15 03:08:38.240609] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.257 [2024-05-15 03:08:38.240646] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c61760 00:14:07.257 [2024-05-15 03:08:38.240653] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:07.257 [2024-05-15 03:08:38.240862] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c78690 00:14:07.257 [2024-05-15 03:08:38.240996] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c61760 00:14:07.257 [2024-05-15 03:08:38.241005] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c61760 00:14:07.257 [2024-05-15 03:08:38.241170] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.257 BaseBdev3 00:14:07.257 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:14:07.257 03:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:07.257 03:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:07.257 03:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:07.257 03:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:07.257 03:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:07.257 03:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:07.515 03:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:07.772 [ 00:14:07.772 { 00:14:07.772 "name": "BaseBdev3", 00:14:07.772 "aliases": [ 00:14:07.772 "2e91251f-8b3d-4ec8-87f0-7e30b9a72c20" 00:14:07.772 ], 00:14:07.772 "product_name": "Malloc disk", 00:14:07.772 "block_size": 512, 00:14:07.772 "num_blocks": 65536, 00:14:07.772 "uuid": "2e91251f-8b3d-4ec8-87f0-7e30b9a72c20", 00:14:07.772 "assigned_rate_limits": { 00:14:07.772 "rw_ios_per_sec": 0, 00:14:07.772 "rw_mbytes_per_sec": 0, 00:14:07.772 "r_mbytes_per_sec": 0, 00:14:07.772 "w_mbytes_per_sec": 0 00:14:07.772 }, 00:14:07.772 "claimed": true, 00:14:07.772 "claim_type": "exclusive_write", 00:14:07.772 "zoned": false, 00:14:07.772 "supported_io_types": { 00:14:07.772 "read": true, 00:14:07.772 "write": true, 00:14:07.772 "unmap": true, 00:14:07.772 "write_zeroes": true, 00:14:07.772 "flush": true, 00:14:07.772 "reset": true, 00:14:07.772 "compare": false, 00:14:07.772 "compare_and_write": false, 00:14:07.772 "abort": true, 00:14:07.772 "nvme_admin": false, 00:14:07.772 "nvme_io": false 00:14:07.772 }, 00:14:07.772 "memory_domains": [ 00:14:07.772 { 00:14:07.772 "dma_device_id": "system", 00:14:07.772 "dma_device_type": 1 00:14:07.772 }, 00:14:07.772 { 00:14:07.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.772 "dma_device_type": 2 00:14:07.772 } 00:14:07.772 ], 00:14:07.772 "driver_specific": {} 00:14:07.772 } 00:14:07.772 ] 00:14:07.772 03:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:07.772 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.773 03:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.030 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:08.030 "name": "Existed_Raid", 00:14:08.030 "uuid": "eb02e27d-88c5-4040-a037-9b4aaa949f63", 00:14:08.030 "strip_size_kb": 0, 00:14:08.030 "state": "online", 
00:14:08.030 "raid_level": "raid1", 00:14:08.030 "superblock": false, 00:14:08.030 "num_base_bdevs": 3, 00:14:08.030 "num_base_bdevs_discovered": 3, 00:14:08.030 "num_base_bdevs_operational": 3, 00:14:08.030 "base_bdevs_list": [ 00:14:08.030 { 00:14:08.030 "name": "BaseBdev1", 00:14:08.030 "uuid": "15818de5-696e-4525-be11-8fa2a811712c", 00:14:08.030 "is_configured": true, 00:14:08.030 "data_offset": 0, 00:14:08.030 "data_size": 65536 00:14:08.030 }, 00:14:08.030 { 00:14:08.030 "name": "BaseBdev2", 00:14:08.030 "uuid": "8e4df38e-3e36-40e1-a97b-2068dff28e7a", 00:14:08.030 "is_configured": true, 00:14:08.030 "data_offset": 0, 00:14:08.030 "data_size": 65536 00:14:08.030 }, 00:14:08.030 { 00:14:08.030 "name": "BaseBdev3", 00:14:08.030 "uuid": "2e91251f-8b3d-4ec8-87f0-7e30b9a72c20", 00:14:08.030 "is_configured": true, 00:14:08.030 "data_offset": 0, 00:14:08.030 "data_size": 65536 00:14:08.030 } 00:14:08.030 ] 00:14:08.030 }' 00:14:08.030 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:08.030 03:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.597 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:14:08.597 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:08.597 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:08.597 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:08.597 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:08.597 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:14:08.597 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:08.597 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:08.855 [2024-05-15 03:08:39.877281] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.855 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:08.855 "name": "Existed_Raid", 00:14:08.855 "aliases": [ 00:14:08.855 "eb02e27d-88c5-4040-a037-9b4aaa949f63" 00:14:08.855 ], 00:14:08.855 "product_name": "Raid Volume", 00:14:08.855 "block_size": 512, 00:14:08.855 "num_blocks": 65536, 00:14:08.855 "uuid": "eb02e27d-88c5-4040-a037-9b4aaa949f63", 00:14:08.855 "assigned_rate_limits": { 00:14:08.855 "rw_ios_per_sec": 0, 00:14:08.855 "rw_mbytes_per_sec": 0, 00:14:08.855 "r_mbytes_per_sec": 0, 00:14:08.855 "w_mbytes_per_sec": 0 00:14:08.855 }, 00:14:08.855 "claimed": false, 00:14:08.855 "zoned": false, 00:14:08.855 "supported_io_types": { 00:14:08.855 "read": true, 00:14:08.855 "write": true, 00:14:08.855 "unmap": false, 00:14:08.855 "write_zeroes": true, 00:14:08.855 "flush": false, 00:14:08.855 "reset": true, 00:14:08.855 "compare": false, 00:14:08.855 "compare_and_write": false, 00:14:08.855 "abort": false, 00:14:08.855 "nvme_admin": false, 00:14:08.855 "nvme_io": false 00:14:08.855 }, 00:14:08.855 "memory_domains": [ 00:14:08.855 { 00:14:08.855 "dma_device_id": "system", 00:14:08.855 "dma_device_type": 1 00:14:08.855 }, 00:14:08.855 { 00:14:08.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.855 "dma_device_type": 2 00:14:08.855 }, 
00:14:08.855 { 00:14:08.855 "dma_device_id": "system", 00:14:08.855 "dma_device_type": 1 00:14:08.855 }, 00:14:08.855 { 00:14:08.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.855 "dma_device_type": 2 00:14:08.855 }, 00:14:08.855 { 00:14:08.855 "dma_device_id": "system", 00:14:08.855 "dma_device_type": 1 00:14:08.855 }, 00:14:08.855 { 00:14:08.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.855 "dma_device_type": 2 00:14:08.855 } 00:14:08.856 ], 00:14:08.856 "driver_specific": { 00:14:08.856 "raid": { 00:14:08.856 "uuid": "eb02e27d-88c5-4040-a037-9b4aaa949f63", 00:14:08.856 "strip_size_kb": 0, 00:14:08.856 "state": "online", 00:14:08.856 "raid_level": "raid1", 00:14:08.856 "superblock": false, 00:14:08.856 "num_base_bdevs": 3, 00:14:08.856 "num_base_bdevs_discovered": 3, 00:14:08.856 "num_base_bdevs_operational": 3, 00:14:08.856 "base_bdevs_list": [ 00:14:08.856 { 00:14:08.856 "name": "BaseBdev1", 00:14:08.856 "uuid": "15818de5-696e-4525-be11-8fa2a811712c", 00:14:08.856 "is_configured": true, 00:14:08.856 "data_offset": 0, 00:14:08.856 "data_size": 65536 00:14:08.856 }, 00:14:08.856 { 00:14:08.856 "name": "BaseBdev2", 00:14:08.856 "uuid": "8e4df38e-3e36-40e1-a97b-2068dff28e7a", 00:14:08.856 "is_configured": true, 00:14:08.856 "data_offset": 0, 00:14:08.856 "data_size": 65536 00:14:08.856 }, 00:14:08.856 { 00:14:08.856 "name": "BaseBdev3", 00:14:08.856 "uuid": "2e91251f-8b3d-4ec8-87f0-7e30b9a72c20", 00:14:08.856 "is_configured": true, 00:14:08.856 "data_offset": 0, 00:14:08.856 "data_size": 65536 00:14:08.856 } 00:14:08.856 ] 00:14:08.856 } 00:14:08.856 } 00:14:08.856 }' 00:14:08.856 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.856 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:14:08.856 BaseBdev2 00:14:08.856 BaseBdev3' 00:14:08.856 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:08.856 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:08.856 03:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:09.113 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:09.113 "name": "BaseBdev1", 00:14:09.113 "aliases": [ 00:14:09.113 "15818de5-696e-4525-be11-8fa2a811712c" 00:14:09.113 ], 00:14:09.113 "product_name": "Malloc disk", 00:14:09.113 "block_size": 512, 00:14:09.113 "num_blocks": 65536, 00:14:09.113 "uuid": "15818de5-696e-4525-be11-8fa2a811712c", 00:14:09.113 "assigned_rate_limits": { 00:14:09.113 "rw_ios_per_sec": 0, 00:14:09.113 "rw_mbytes_per_sec": 0, 00:14:09.113 "r_mbytes_per_sec": 0, 00:14:09.113 "w_mbytes_per_sec": 0 00:14:09.113 }, 00:14:09.113 "claimed": true, 00:14:09.113 "claim_type": "exclusive_write", 00:14:09.113 "zoned": false, 00:14:09.113 "supported_io_types": { 00:14:09.113 "read": true, 00:14:09.113 "write": true, 00:14:09.113 "unmap": true, 00:14:09.113 "write_zeroes": true, 00:14:09.113 "flush": true, 00:14:09.113 "reset": true, 00:14:09.113 "compare": false, 00:14:09.113 "compare_and_write": false, 00:14:09.113 "abort": true, 00:14:09.113 "nvme_admin": false, 00:14:09.113 "nvme_io": false 00:14:09.113 }, 00:14:09.113 "memory_domains": [ 00:14:09.113 { 00:14:09.113 "dma_device_id": "system", 
00:14:09.113 "dma_device_type": 1 00:14:09.113 }, 00:14:09.113 { 00:14:09.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.113 "dma_device_type": 2 00:14:09.113 } 00:14:09.113 ], 00:14:09.113 "driver_specific": {} 00:14:09.113 }' 00:14:09.113 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:09.113 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:09.370 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:09.370 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:09.370 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:09.370 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.370 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:09.370 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:09.370 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.370 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:09.628 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:09.628 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:09.628 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:09.628 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:09.628 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:09.887 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:09.887 "name": "BaseBdev2", 00:14:09.887 "aliases": [ 00:14:09.887 "8e4df38e-3e36-40e1-a97b-2068dff28e7a" 00:14:09.887 ], 00:14:09.887 "product_name": "Malloc disk", 00:14:09.887 "block_size": 512, 00:14:09.887 "num_blocks": 65536, 00:14:09.887 "uuid": "8e4df38e-3e36-40e1-a97b-2068dff28e7a", 00:14:09.887 "assigned_rate_limits": { 00:14:09.887 "rw_ios_per_sec": 0, 00:14:09.887 "rw_mbytes_per_sec": 0, 00:14:09.887 "r_mbytes_per_sec": 0, 00:14:09.887 "w_mbytes_per_sec": 0 00:14:09.887 }, 00:14:09.887 "claimed": true, 00:14:09.887 "claim_type": "exclusive_write", 00:14:09.887 "zoned": false, 00:14:09.887 "supported_io_types": { 00:14:09.887 "read": true, 00:14:09.887 "write": true, 00:14:09.887 "unmap": true, 00:14:09.887 "write_zeroes": true, 00:14:09.887 "flush": true, 00:14:09.887 "reset": true, 00:14:09.887 "compare": false, 00:14:09.887 "compare_and_write": false, 00:14:09.887 "abort": true, 00:14:09.887 "nvme_admin": false, 00:14:09.887 "nvme_io": false 00:14:09.887 }, 00:14:09.887 "memory_domains": [ 00:14:09.887 { 00:14:09.887 "dma_device_id": "system", 00:14:09.887 "dma_device_type": 1 00:14:09.887 }, 00:14:09.887 { 00:14:09.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.887 "dma_device_type": 2 00:14:09.887 } 00:14:09.887 ], 00:14:09.887 "driver_specific": {} 00:14:09.887 }' 00:14:09.887 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:09.887 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:09.887 03:08:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:09.887 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:09.887 03:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:09.887 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.887 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:10.144 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:10.144 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:10.144 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:10.144 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:10.144 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:10.144 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:10.144 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:10.144 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:10.402 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:10.402 "name": "BaseBdev3", 00:14:10.402 "aliases": [ 00:14:10.402 "2e91251f-8b3d-4ec8-87f0-7e30b9a72c20" 00:14:10.402 ], 00:14:10.402 "product_name": "Malloc disk", 00:14:10.402 "block_size": 512, 00:14:10.402 "num_blocks": 65536, 00:14:10.402 "uuid": "2e91251f-8b3d-4ec8-87f0-7e30b9a72c20", 00:14:10.402 "assigned_rate_limits": { 00:14:10.402 "rw_ios_per_sec": 0, 00:14:10.402 "rw_mbytes_per_sec": 0, 00:14:10.402 "r_mbytes_per_sec": 0, 00:14:10.402 "w_mbytes_per_sec": 0 00:14:10.402 }, 00:14:10.402 "claimed": true, 00:14:10.402 "claim_type": "exclusive_write", 00:14:10.402 "zoned": false, 00:14:10.402 "supported_io_types": { 00:14:10.402 "read": true, 00:14:10.402 "write": true, 00:14:10.402 "unmap": true, 00:14:10.402 "write_zeroes": true, 00:14:10.402 "flush": true, 00:14:10.402 "reset": true, 00:14:10.402 "compare": false, 00:14:10.402 "compare_and_write": false, 00:14:10.402 "abort": true, 00:14:10.402 "nvme_admin": false, 00:14:10.402 "nvme_io": false 00:14:10.402 }, 00:14:10.402 "memory_domains": [ 00:14:10.402 { 00:14:10.402 "dma_device_id": "system", 00:14:10.402 "dma_device_type": 1 00:14:10.402 }, 00:14:10.402 { 00:14:10.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.402 "dma_device_type": 2 00:14:10.402 } 00:14:10.402 ], 00:14:10.402 "driver_specific": {} 00:14:10.402 }' 00:14:10.402 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:10.402 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:10.659 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:10.659 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:10.659 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:10.659 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:10.659 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.md_interleave 00:14:10.659 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:10.659 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:10.659 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:10.659 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:10.917 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:10.917 03:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:11.175 [2024-05-15 03:08:42.078958] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.175 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.432 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:11.432 "name": "Existed_Raid", 00:14:11.432 "uuid": "eb02e27d-88c5-4040-a037-9b4aaa949f63", 00:14:11.432 "strip_size_kb": 0, 00:14:11.432 "state": "online", 00:14:11.432 "raid_level": "raid1", 00:14:11.432 "superblock": false, 00:14:11.432 "num_base_bdevs": 3, 00:14:11.432 "num_base_bdevs_discovered": 2, 00:14:11.432 "num_base_bdevs_operational": 2, 00:14:11.432 "base_bdevs_list": [ 00:14:11.432 { 00:14:11.432 "name": null, 00:14:11.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.432 "is_configured": false, 00:14:11.432 "data_offset": 0, 00:14:11.432 "data_size": 65536 
00:14:11.432 }, 00:14:11.432 { 00:14:11.432 "name": "BaseBdev2", 00:14:11.432 "uuid": "8e4df38e-3e36-40e1-a97b-2068dff28e7a", 00:14:11.432 "is_configured": true, 00:14:11.432 "data_offset": 0, 00:14:11.432 "data_size": 65536 00:14:11.432 }, 00:14:11.432 { 00:14:11.432 "name": "BaseBdev3", 00:14:11.432 "uuid": "2e91251f-8b3d-4ec8-87f0-7e30b9a72c20", 00:14:11.432 "is_configured": true, 00:14:11.432 "data_offset": 0, 00:14:11.432 "data_size": 65536 00:14:11.432 } 00:14:11.432 ] 00:14:11.432 }' 00:14:11.432 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:11.432 03:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.998 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:11.998 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:11.998 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.998 03:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:12.255 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:12.255 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:12.255 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:12.512 [2024-05-15 03:08:43.471829] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:12.512 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:12.512 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:12.512 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.512 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:12.768 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:12.768 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:12.768 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:12.768 [2024-05-15 03:08:43.907385] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:12.768 [2024-05-15 03:08:43.907448] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.768 [2024-05-15 03:08:43.917923] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.768 [2024-05-15 03:08:43.917978] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.768 [2024-05-15 03:08:43.917988] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c61760 name Existed_Raid, state offline 00:14:13.025 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:13.025 03:08:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:13.025 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.025 03:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:14:13.283 03:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:14:13.283 03:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:14:13.283 03:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:14:13.283 03:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:14:13.283 03:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:13.283 03:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:13.283 BaseBdev2 00:14:13.540 03:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:14:13.541 03:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:13.541 03:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:13.541 03:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:13.541 03:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:13.541 03:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:13.541 03:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:13.799 03:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:13.799 [ 00:14:13.799 { 00:14:13.799 "name": "BaseBdev2", 00:14:13.799 "aliases": [ 00:14:13.799 "91bc8b33-4954-4a1e-91f2-bdb7d5d64642" 00:14:13.799 ], 00:14:13.799 "product_name": "Malloc disk", 00:14:13.799 "block_size": 512, 00:14:13.799 "num_blocks": 65536, 00:14:13.799 "uuid": "91bc8b33-4954-4a1e-91f2-bdb7d5d64642", 00:14:13.799 "assigned_rate_limits": { 00:14:13.799 "rw_ios_per_sec": 0, 00:14:13.799 "rw_mbytes_per_sec": 0, 00:14:13.799 "r_mbytes_per_sec": 0, 00:14:13.799 "w_mbytes_per_sec": 0 00:14:13.799 }, 00:14:13.799 "claimed": false, 00:14:13.799 "zoned": false, 00:14:13.799 "supported_io_types": { 00:14:13.799 "read": true, 00:14:13.799 "write": true, 00:14:13.799 "unmap": true, 00:14:13.799 "write_zeroes": true, 00:14:13.799 "flush": true, 00:14:13.799 "reset": true, 00:14:13.799 "compare": false, 00:14:13.799 "compare_and_write": false, 00:14:13.799 "abort": true, 00:14:13.799 "nvme_admin": false, 00:14:13.799 "nvme_io": false 00:14:13.799 }, 00:14:13.799 "memory_domains": [ 00:14:13.799 { 00:14:13.799 "dma_device_id": "system", 00:14:13.799 "dma_device_type": 1 00:14:13.799 }, 00:14:13.799 { 00:14:13.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.799 "dma_device_type": 2 00:14:13.799 } 00:14:13.799 ], 00:14:13.799 "driver_specific": {} 00:14:13.799 } 00:14:13.799 ] 00:14:13.799 
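The waitforbdev helper traced above for BaseBdev2 is just a guarded existence check: create the bdev, wait for examine callbacks to settle, then query with a timeout. Roughly equivalent shell, under the same rpc.py and socket assumptions:
  #!/usr/bin/env bash
  # The waitforbdev pattern from the trace, reduced to its three RPCs.
  sock=/var/tmp/spdk-raid.sock
  rpc.py -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2   # 32 MiB, 512 B blocks
  rpc.py -s "$sock" bdev_wait_for_examine
  # -t 2000: let the query poll up to 2000 ms for the bdev to register.
  rpc.py -s "$sock" bdev_get_bdevs -b BaseBdev2 -t 2000 >/dev/null && echo "BaseBdev2 ready"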
03:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:13.799 03:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:13.799 03:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:13.799 03:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:14.057 BaseBdev3 00:14:14.314 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:14:14.314 03:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:14.314 03:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:14.314 03:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:14.314 03:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:14.314 03:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:14.314 03:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.314 03:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:14.572 [ 00:14:14.572 { 00:14:14.572 "name": "BaseBdev3", 00:14:14.572 "aliases": [ 00:14:14.572 "41f4aa3b-858a-4daf-916f-9f7bbf32c657" 00:14:14.572 ], 00:14:14.572 "product_name": "Malloc disk", 00:14:14.572 "block_size": 512, 00:14:14.572 "num_blocks": 65536, 00:14:14.572 "uuid": "41f4aa3b-858a-4daf-916f-9f7bbf32c657", 00:14:14.572 "assigned_rate_limits": { 00:14:14.572 "rw_ios_per_sec": 0, 00:14:14.572 "rw_mbytes_per_sec": 0, 00:14:14.572 "r_mbytes_per_sec": 0, 00:14:14.572 "w_mbytes_per_sec": 0 00:14:14.572 }, 00:14:14.572 "claimed": false, 00:14:14.572 "zoned": false, 00:14:14.572 "supported_io_types": { 00:14:14.572 "read": true, 00:14:14.572 "write": true, 00:14:14.572 "unmap": true, 00:14:14.572 "write_zeroes": true, 00:14:14.572 "flush": true, 00:14:14.572 "reset": true, 00:14:14.572 "compare": false, 00:14:14.572 "compare_and_write": false, 00:14:14.572 "abort": true, 00:14:14.572 "nvme_admin": false, 00:14:14.572 "nvme_io": false 00:14:14.572 }, 00:14:14.572 "memory_domains": [ 00:14:14.572 { 00:14:14.572 "dma_device_id": "system", 00:14:14.572 "dma_device_type": 1 00:14:14.572 }, 00:14:14.572 { 00:14:14.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.572 "dma_device_type": 2 00:14:14.572 } 00:14:14.572 ], 00:14:14.572 "driver_specific": {} 00:14:14.572 } 00:14:14.572 ] 00:14:14.572 03:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:14.572 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:14.572 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:14.572 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:14.829 [2024-05-15 
03:08:45.899861] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.829 [2024-05-15 03:08:45.899899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.829 [2024-05-15 03:08:45.899916] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.829 [2024-05-15 03:08:45.901275] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.829 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:14.829 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:14.830 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:14.830 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:14.830 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:14.830 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:14.830 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:14.830 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:14.830 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:14.830 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:14.830 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.830 03:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.087 03:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:15.087 "name": "Existed_Raid", 00:14:15.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.087 "strip_size_kb": 0, 00:14:15.087 "state": "configuring", 00:14:15.087 "raid_level": "raid1", 00:14:15.087 "superblock": false, 00:14:15.087 "num_base_bdevs": 3, 00:14:15.087 "num_base_bdevs_discovered": 2, 00:14:15.087 "num_base_bdevs_operational": 3, 00:14:15.087 "base_bdevs_list": [ 00:14:15.087 { 00:14:15.087 "name": "BaseBdev1", 00:14:15.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.087 "is_configured": false, 00:14:15.087 "data_offset": 0, 00:14:15.087 "data_size": 0 00:14:15.087 }, 00:14:15.087 { 00:14:15.087 "name": "BaseBdev2", 00:14:15.087 "uuid": "91bc8b33-4954-4a1e-91f2-bdb7d5d64642", 00:14:15.087 "is_configured": true, 00:14:15.087 "data_offset": 0, 00:14:15.087 "data_size": 65536 00:14:15.087 }, 00:14:15.087 { 00:14:15.087 "name": "BaseBdev3", 00:14:15.087 "uuid": "41f4aa3b-858a-4daf-916f-9f7bbf32c657", 00:14:15.087 "is_configured": true, 00:14:15.087 "data_offset": 0, 00:14:15.087 "data_size": 65536 00:14:15.087 } 00:14:15.087 ] 00:14:15.087 }' 00:14:15.087 03:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:15.087 03:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.020 03:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:16.020 [2024-05-15 03:08:47.050927] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.020 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:16.020 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:16.020 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:16.020 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:16.020 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:16.020 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:16.020 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:16.020 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:16.020 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:16.020 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:16.020 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.020 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.278 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:16.278 "name": "Existed_Raid", 00:14:16.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.278 "strip_size_kb": 0, 00:14:16.278 "state": "configuring", 00:14:16.278 "raid_level": "raid1", 00:14:16.278 "superblock": false, 00:14:16.278 "num_base_bdevs": 3, 00:14:16.278 "num_base_bdevs_discovered": 1, 00:14:16.278 "num_base_bdevs_operational": 3, 00:14:16.278 "base_bdevs_list": [ 00:14:16.278 { 00:14:16.278 "name": "BaseBdev1", 00:14:16.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.278 "is_configured": false, 00:14:16.278 "data_offset": 0, 00:14:16.278 "data_size": 0 00:14:16.278 }, 00:14:16.278 { 00:14:16.278 "name": null, 00:14:16.278 "uuid": "91bc8b33-4954-4a1e-91f2-bdb7d5d64642", 00:14:16.278 "is_configured": false, 00:14:16.278 "data_offset": 0, 00:14:16.278 "data_size": 65536 00:14:16.278 }, 00:14:16.278 { 00:14:16.278 "name": "BaseBdev3", 00:14:16.278 "uuid": "41f4aa3b-858a-4daf-916f-9f7bbf32c657", 00:14:16.278 "is_configured": true, 00:14:16.278 "data_offset": 0, 00:14:16.278 "data_size": 65536 00:14:16.278 } 00:14:16.278 ] 00:14:16.278 }' 00:14:16.278 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:16.278 03:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.844 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.844 03:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:17.101 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:14:17.101 03:08:48 
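The is_configured probe just traced is the crux of this step: after bdev_raid_remove_base_bdev, the slot is expected to survive as an unconfigured placeholder rather than vanish from base_bdevs_list. A sketch of the same assertion, with the slot index 1 taken from the trace:
  #!/usr/bin/env bash
  # Assert that removing a base bdev leaves an unconfigured hole in slot 1.
  sock=/var/tmp/spdk-raid.sock
  rpc.py -s "$sock" bdev_raid_remove_base_bdev BaseBdev2
  cfg=$(rpc.py -s "$sock" bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured')
  [[ $cfg == false ]] && echo "slot 1 kept as placeholder"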
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:17.359 [2024-05-15 03:08:48.462137] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.359 BaseBdev1 00:14:17.359 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:14:17.359 03:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:17.359 03:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:17.359 03:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:17.359 03:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:17.359 03:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:17.359 03:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:17.617 03:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:17.875 [ 00:14:17.875 { 00:14:17.875 "name": "BaseBdev1", 00:14:17.875 "aliases": [ 00:14:17.875 "9157c468-556a-457a-9c56-637a76222b5e" 00:14:17.875 ], 00:14:17.875 "product_name": "Malloc disk", 00:14:17.875 "block_size": 512, 00:14:17.875 "num_blocks": 65536, 00:14:17.875 "uuid": "9157c468-556a-457a-9c56-637a76222b5e", 00:14:17.875 "assigned_rate_limits": { 00:14:17.875 "rw_ios_per_sec": 0, 00:14:17.875 "rw_mbytes_per_sec": 0, 00:14:17.875 "r_mbytes_per_sec": 0, 00:14:17.875 "w_mbytes_per_sec": 0 00:14:17.875 }, 00:14:17.875 "claimed": true, 00:14:17.875 "claim_type": "exclusive_write", 00:14:17.875 "zoned": false, 00:14:17.875 "supported_io_types": { 00:14:17.875 "read": true, 00:14:17.875 "write": true, 00:14:17.875 "unmap": true, 00:14:17.875 "write_zeroes": true, 00:14:17.875 "flush": true, 00:14:17.875 "reset": true, 00:14:17.875 "compare": false, 00:14:17.875 "compare_and_write": false, 00:14:17.875 "abort": true, 00:14:17.875 "nvme_admin": false, 00:14:17.875 "nvme_io": false 00:14:17.875 }, 00:14:17.875 "memory_domains": [ 00:14:17.875 { 00:14:17.875 "dma_device_id": "system", 00:14:17.875 "dma_device_type": 1 00:14:17.875 }, 00:14:17.875 { 00:14:17.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.875 "dma_device_type": 2 00:14:17.875 } 00:14:17.875 ], 00:14:17.875 "driver_specific": {} 00:14:17.875 } 00:14:17.875 ] 00:14:17.875 03:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:17.875 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:17.875 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:17.875 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:17.875 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:17.875 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:17.875 03:08:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:17.875 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:17.875 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:17.875 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:17.875 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:17.875 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.875 03:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.133 03:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:18.133 "name": "Existed_Raid", 00:14:18.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.133 "strip_size_kb": 0, 00:14:18.133 "state": "configuring", 00:14:18.133 "raid_level": "raid1", 00:14:18.133 "superblock": false, 00:14:18.133 "num_base_bdevs": 3, 00:14:18.133 "num_base_bdevs_discovered": 2, 00:14:18.133 "num_base_bdevs_operational": 3, 00:14:18.133 "base_bdevs_list": [ 00:14:18.133 { 00:14:18.133 "name": "BaseBdev1", 00:14:18.133 "uuid": "9157c468-556a-457a-9c56-637a76222b5e", 00:14:18.133 "is_configured": true, 00:14:18.133 "data_offset": 0, 00:14:18.133 "data_size": 65536 00:14:18.133 }, 00:14:18.133 { 00:14:18.133 "name": null, 00:14:18.133 "uuid": "91bc8b33-4954-4a1e-91f2-bdb7d5d64642", 00:14:18.133 "is_configured": false, 00:14:18.133 "data_offset": 0, 00:14:18.133 "data_size": 65536 00:14:18.133 }, 00:14:18.133 { 00:14:18.133 "name": "BaseBdev3", 00:14:18.133 "uuid": "41f4aa3b-858a-4daf-916f-9f7bbf32c657", 00:14:18.133 "is_configured": true, 00:14:18.133 "data_offset": 0, 00:14:18.133 "data_size": 65536 00:14:18.133 } 00:14:18.133 ] 00:14:18.133 }' 00:14:18.133 03:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:18.133 03:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.066 03:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.066 03:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:19.066 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:19.066 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:19.324 [2024-05-15 03:08:50.367274] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:19.324 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:19.324 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:19.324 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:19.324 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:19.324 03:08:50 
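Each verify_raid_bdev_state call in this trace boils down to one RPC plus a handful of field comparisons on the entry selected by name. A condensed sketch of that check, with the expected values taken from the invocation above (configuring, raid1, 3 operational):
  #!/usr/bin/env bash
  # Condensed form of the verify_raid_bdev_state checks traced above.
  sock=/var/tmp/spdk-raid.sock
  info=$(rpc.py -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r .state      <<<"$info") == configuring ]] || echo "state mismatch"
  [[ $(jq -r .raid_level <<<"$info") == raid1       ]] || echo "raid_level mismatch"
  [[ $(jq .num_base_bdevs_operational <<<"$info") == 3 ]] || echo "operational count mismatch"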
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:19.324 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:19.324 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:19.324 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:19.324 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:19.324 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:19.324 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.324 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.582 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:19.582 "name": "Existed_Raid", 00:14:19.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.582 "strip_size_kb": 0, 00:14:19.582 "state": "configuring", 00:14:19.582 "raid_level": "raid1", 00:14:19.582 "superblock": false, 00:14:19.582 "num_base_bdevs": 3, 00:14:19.582 "num_base_bdevs_discovered": 1, 00:14:19.582 "num_base_bdevs_operational": 3, 00:14:19.582 "base_bdevs_list": [ 00:14:19.582 { 00:14:19.582 "name": "BaseBdev1", 00:14:19.582 "uuid": "9157c468-556a-457a-9c56-637a76222b5e", 00:14:19.582 "is_configured": true, 00:14:19.582 "data_offset": 0, 00:14:19.582 "data_size": 65536 00:14:19.582 }, 00:14:19.582 { 00:14:19.582 "name": null, 00:14:19.582 "uuid": "91bc8b33-4954-4a1e-91f2-bdb7d5d64642", 00:14:19.582 "is_configured": false, 00:14:19.582 "data_offset": 0, 00:14:19.582 "data_size": 65536 00:14:19.582 }, 00:14:19.582 { 00:14:19.582 "name": null, 00:14:19.582 "uuid": "41f4aa3b-858a-4daf-916f-9f7bbf32c657", 00:14:19.582 "is_configured": false, 00:14:19.582 "data_offset": 0, 00:14:19.582 "data_size": 65536 00:14:19.582 } 00:14:19.582 ] 00:14:19.582 }' 00:14:19.582 03:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:19.582 03:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.147 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.147 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:20.404 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:14:20.404 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:20.662 [2024-05-15 03:08:51.742988] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.662 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:20.662 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:20.662 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 
00:14:20.662 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:20.662 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:20.662 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:20.662 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:20.662 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:20.662 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:20.662 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:20.662 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.662 03:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.920 03:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:20.920 "name": "Existed_Raid", 00:14:20.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.920 "strip_size_kb": 0, 00:14:20.920 "state": "configuring", 00:14:20.920 "raid_level": "raid1", 00:14:20.920 "superblock": false, 00:14:20.920 "num_base_bdevs": 3, 00:14:20.920 "num_base_bdevs_discovered": 2, 00:14:20.920 "num_base_bdevs_operational": 3, 00:14:20.920 "base_bdevs_list": [ 00:14:20.920 { 00:14:20.920 "name": "BaseBdev1", 00:14:20.920 "uuid": "9157c468-556a-457a-9c56-637a76222b5e", 00:14:20.920 "is_configured": true, 00:14:20.920 "data_offset": 0, 00:14:20.920 "data_size": 65536 00:14:20.920 }, 00:14:20.920 { 00:14:20.920 "name": null, 00:14:20.920 "uuid": "91bc8b33-4954-4a1e-91f2-bdb7d5d64642", 00:14:20.920 "is_configured": false, 00:14:20.920 "data_offset": 0, 00:14:20.920 "data_size": 65536 00:14:20.920 }, 00:14:20.920 { 00:14:20.920 "name": "BaseBdev3", 00:14:20.920 "uuid": "41f4aa3b-858a-4daf-916f-9f7bbf32c657", 00:14:20.920 "is_configured": true, 00:14:20.920 "data_offset": 0, 00:14:20.920 "data_size": 65536 00:14:20.920 } 00:14:20.920 ] 00:14:20.920 }' 00:14:20.920 03:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:20.920 03:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.485 03:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:21.485 03:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.744 03:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:14:21.744 03:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:22.003 [2024-05-15 03:08:53.114841] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:22.003 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:22.003 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:22.003 03:08:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:22.003 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:22.003 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:22.003 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:22.003 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.003 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:22.003 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.003 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.003 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.003 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.261 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.261 "name": "Existed_Raid", 00:14:22.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.261 "strip_size_kb": 0, 00:14:22.261 "state": "configuring", 00:14:22.261 "raid_level": "raid1", 00:14:22.261 "superblock": false, 00:14:22.261 "num_base_bdevs": 3, 00:14:22.261 "num_base_bdevs_discovered": 1, 00:14:22.261 "num_base_bdevs_operational": 3, 00:14:22.261 "base_bdevs_list": [ 00:14:22.261 { 00:14:22.261 "name": null, 00:14:22.261 "uuid": "9157c468-556a-457a-9c56-637a76222b5e", 00:14:22.261 "is_configured": false, 00:14:22.261 "data_offset": 0, 00:14:22.261 "data_size": 65536 00:14:22.261 }, 00:14:22.261 { 00:14:22.261 "name": null, 00:14:22.261 "uuid": "91bc8b33-4954-4a1e-91f2-bdb7d5d64642", 00:14:22.261 "is_configured": false, 00:14:22.261 "data_offset": 0, 00:14:22.261 "data_size": 65536 00:14:22.261 }, 00:14:22.261 { 00:14:22.261 "name": "BaseBdev3", 00:14:22.261 "uuid": "41f4aa3b-858a-4daf-916f-9f7bbf32c657", 00:14:22.261 "is_configured": true, 00:14:22.261 "data_offset": 0, 00:14:22.261 "data_size": 65536 00:14:22.261 } 00:14:22.261 ] 00:14:22.261 }' 00:14:22.261 03:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.261 03:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.204 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.204 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:23.204 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:14:23.204 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:23.483 [2024-05-15 03:08:54.416866] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.483 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:23.483 
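The step just traced closes the remove/re-add loop: the malloc bdev survives removal from the raid, so bdev_raid_add_base_bdev can hand the same bdev straight back and the raid claims it again. The round trip as a minimal sketch, under the same socket assumptions (the discovered counts depend on how many other slots are populated at the time):
  #!/usr/bin/env bash
  # Remove a base bdev, observe the count drop, then hand the bdev back.
  sock=/var/tmp/spdk-raid.sock
  rpc.py -s "$sock" bdev_raid_remove_base_bdev BaseBdev2
  rpc.py -s "$sock" bdev_raid_get_bdevs all | jq '.[0].num_base_bdevs_discovered'  # drops by one
  rpc.py -s "$sock" bdev_raid_add_base_bdev Existed_Raid BaseBdev2
  rpc.py -s "$sock" bdev_raid_get_bdevs all | jq '.[0].num_base_bdevs_discovered'  # back up by one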
03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:23.483 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:23.483 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:23.483 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:23.483 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:23.483 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:23.483 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:23.483 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:23.483 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:23.483 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.483 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.753 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:23.753 "name": "Existed_Raid", 00:14:23.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.753 "strip_size_kb": 0, 00:14:23.753 "state": "configuring", 00:14:23.753 "raid_level": "raid1", 00:14:23.753 "superblock": false, 00:14:23.753 "num_base_bdevs": 3, 00:14:23.753 "num_base_bdevs_discovered": 2, 00:14:23.753 "num_base_bdevs_operational": 3, 00:14:23.753 "base_bdevs_list": [ 00:14:23.753 { 00:14:23.753 "name": null, 00:14:23.753 "uuid": "9157c468-556a-457a-9c56-637a76222b5e", 00:14:23.753 "is_configured": false, 00:14:23.753 "data_offset": 0, 00:14:23.753 "data_size": 65536 00:14:23.753 }, 00:14:23.753 { 00:14:23.753 "name": "BaseBdev2", 00:14:23.753 "uuid": "91bc8b33-4954-4a1e-91f2-bdb7d5d64642", 00:14:23.753 "is_configured": true, 00:14:23.753 "data_offset": 0, 00:14:23.753 "data_size": 65536 00:14:23.753 }, 00:14:23.753 { 00:14:23.753 "name": "BaseBdev3", 00:14:23.753 "uuid": "41f4aa3b-858a-4daf-916f-9f7bbf32c657", 00:14:23.753 "is_configured": true, 00:14:23.753 "data_offset": 0, 00:14:23.753 "data_size": 65536 00:14:23.753 } 00:14:23.753 ] 00:14:23.753 }' 00:14:23.753 03:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:23.753 03:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.317 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.317 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:24.573 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:14:24.573 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.573 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:24.829 03:08:55 
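The uuid extraction above is what makes the step that follows work: the raid remembers the departed slot 0 by UUID, so a replacement malloc bdev created with -u under that UUID is claimed automatically. A sketch of the same sequence, reading the UUID back from the raid rather than hard-coding it:
  #!/usr/bin/env bash
  # Rebuild the missing slot 0 bdev under the UUID the raid still remembers.
  sock=/var/tmp/spdk-raid.sock
  uuid=$(rpc.py -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
  rpc.py -s "$sock" bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
  # With all three slots configured again, the raid transitions back to online.
  rpc.py -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'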
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9157c468-556a-457a-9c56-637a76222b5e 00:14:25.087 [2024-05-15 03:08:56.068520] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:25.087 [2024-05-15 03:08:56.068556] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e06410 00:14:25.087 [2024-05-15 03:08:56.068563] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:25.087 [2024-05-15 03:08:56.068761] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e09fb0 00:14:25.087 [2024-05-15 03:08:56.068900] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e06410 00:14:25.087 [2024-05-15 03:08:56.068909] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1e06410 00:14:25.087 [2024-05-15 03:08:56.069067] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.087 NewBaseBdev 00:14:25.087 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:14:25.087 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:14:25.087 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:25.087 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:25.087 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:25.087 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:25.087 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:25.344 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:25.601 [ 00:14:25.601 { 00:14:25.601 "name": "NewBaseBdev", 00:14:25.601 "aliases": [ 00:14:25.601 "9157c468-556a-457a-9c56-637a76222b5e" 00:14:25.601 ], 00:14:25.601 "product_name": "Malloc disk", 00:14:25.601 "block_size": 512, 00:14:25.601 "num_blocks": 65536, 00:14:25.601 "uuid": "9157c468-556a-457a-9c56-637a76222b5e", 00:14:25.601 "assigned_rate_limits": { 00:14:25.601 "rw_ios_per_sec": 0, 00:14:25.601 "rw_mbytes_per_sec": 0, 00:14:25.601 "r_mbytes_per_sec": 0, 00:14:25.601 "w_mbytes_per_sec": 0 00:14:25.601 }, 00:14:25.601 "claimed": true, 00:14:25.601 "claim_type": "exclusive_write", 00:14:25.601 "zoned": false, 00:14:25.601 "supported_io_types": { 00:14:25.601 "read": true, 00:14:25.601 "write": true, 00:14:25.601 "unmap": true, 00:14:25.601 "write_zeroes": true, 00:14:25.601 "flush": true, 00:14:25.601 "reset": true, 00:14:25.601 "compare": false, 00:14:25.601 "compare_and_write": false, 00:14:25.601 "abort": true, 00:14:25.601 "nvme_admin": false, 00:14:25.601 "nvme_io": false 00:14:25.601 }, 00:14:25.601 "memory_domains": [ 00:14:25.601 { 00:14:25.601 "dma_device_id": "system", 00:14:25.601 "dma_device_type": 1 00:14:25.601 }, 00:14:25.601 { 00:14:25.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.601 "dma_device_type": 2 00:14:25.601 } 00:14:25.601 ], 
00:14:25.601 "driver_specific": {} 00:14:25.601 } 00:14:25.601 ] 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.601 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.858 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:25.858 "name": "Existed_Raid", 00:14:25.858 "uuid": "362bafc0-36db-47f9-b508-32a5b507ccf0", 00:14:25.858 "strip_size_kb": 0, 00:14:25.858 "state": "online", 00:14:25.858 "raid_level": "raid1", 00:14:25.858 "superblock": false, 00:14:25.858 "num_base_bdevs": 3, 00:14:25.858 "num_base_bdevs_discovered": 3, 00:14:25.858 "num_base_bdevs_operational": 3, 00:14:25.858 "base_bdevs_list": [ 00:14:25.858 { 00:14:25.858 "name": "NewBaseBdev", 00:14:25.858 "uuid": "9157c468-556a-457a-9c56-637a76222b5e", 00:14:25.858 "is_configured": true, 00:14:25.858 "data_offset": 0, 00:14:25.858 "data_size": 65536 00:14:25.858 }, 00:14:25.858 { 00:14:25.858 "name": "BaseBdev2", 00:14:25.858 "uuid": "91bc8b33-4954-4a1e-91f2-bdb7d5d64642", 00:14:25.858 "is_configured": true, 00:14:25.858 "data_offset": 0, 00:14:25.858 "data_size": 65536 00:14:25.858 }, 00:14:25.858 { 00:14:25.858 "name": "BaseBdev3", 00:14:25.858 "uuid": "41f4aa3b-858a-4daf-916f-9f7bbf32c657", 00:14:25.858 "is_configured": true, 00:14:25.858 "data_offset": 0, 00:14:25.858 "data_size": 65536 00:14:25.858 } 00:14:25.858 ] 00:14:25.858 }' 00:14:25.858 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:25.858 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.422 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:14:26.422 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:26.422 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:26.422 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:26.422 03:08:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:26.422 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:14:26.422 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:26.422 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:26.679 [2024-05-15 03:08:57.701185] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.679 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:26.679 "name": "Existed_Raid", 00:14:26.679 "aliases": [ 00:14:26.679 "362bafc0-36db-47f9-b508-32a5b507ccf0" 00:14:26.679 ], 00:14:26.679 "product_name": "Raid Volume", 00:14:26.679 "block_size": 512, 00:14:26.679 "num_blocks": 65536, 00:14:26.679 "uuid": "362bafc0-36db-47f9-b508-32a5b507ccf0", 00:14:26.679 "assigned_rate_limits": { 00:14:26.679 "rw_ios_per_sec": 0, 00:14:26.679 "rw_mbytes_per_sec": 0, 00:14:26.679 "r_mbytes_per_sec": 0, 00:14:26.679 "w_mbytes_per_sec": 0 00:14:26.679 }, 00:14:26.679 "claimed": false, 00:14:26.679 "zoned": false, 00:14:26.679 "supported_io_types": { 00:14:26.679 "read": true, 00:14:26.679 "write": true, 00:14:26.679 "unmap": false, 00:14:26.679 "write_zeroes": true, 00:14:26.679 "flush": false, 00:14:26.679 "reset": true, 00:14:26.679 "compare": false, 00:14:26.679 "compare_and_write": false, 00:14:26.679 "abort": false, 00:14:26.679 "nvme_admin": false, 00:14:26.679 "nvme_io": false 00:14:26.679 }, 00:14:26.679 "memory_domains": [ 00:14:26.679 { 00:14:26.679 "dma_device_id": "system", 00:14:26.679 "dma_device_type": 1 00:14:26.679 }, 00:14:26.679 { 00:14:26.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.679 "dma_device_type": 2 00:14:26.679 }, 00:14:26.679 { 00:14:26.679 "dma_device_id": "system", 00:14:26.679 "dma_device_type": 1 00:14:26.679 }, 00:14:26.679 { 00:14:26.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.679 "dma_device_type": 2 00:14:26.679 }, 00:14:26.679 { 00:14:26.679 "dma_device_id": "system", 00:14:26.679 "dma_device_type": 1 00:14:26.679 }, 00:14:26.679 { 00:14:26.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.679 "dma_device_type": 2 00:14:26.679 } 00:14:26.679 ], 00:14:26.679 "driver_specific": { 00:14:26.679 "raid": { 00:14:26.679 "uuid": "362bafc0-36db-47f9-b508-32a5b507ccf0", 00:14:26.679 "strip_size_kb": 0, 00:14:26.679 "state": "online", 00:14:26.679 "raid_level": "raid1", 00:14:26.679 "superblock": false, 00:14:26.679 "num_base_bdevs": 3, 00:14:26.679 "num_base_bdevs_discovered": 3, 00:14:26.679 "num_base_bdevs_operational": 3, 00:14:26.679 "base_bdevs_list": [ 00:14:26.679 { 00:14:26.679 "name": "NewBaseBdev", 00:14:26.679 "uuid": "9157c468-556a-457a-9c56-637a76222b5e", 00:14:26.679 "is_configured": true, 00:14:26.679 "data_offset": 0, 00:14:26.679 "data_size": 65536 00:14:26.679 }, 00:14:26.679 { 00:14:26.679 "name": "BaseBdev2", 00:14:26.679 "uuid": "91bc8b33-4954-4a1e-91f2-bdb7d5d64642", 00:14:26.679 "is_configured": true, 00:14:26.679 "data_offset": 0, 00:14:26.679 "data_size": 65536 00:14:26.679 }, 00:14:26.679 { 00:14:26.679 "name": "BaseBdev3", 00:14:26.679 "uuid": "41f4aa3b-858a-4daf-916f-9f7bbf32c657", 00:14:26.679 "is_configured": true, 00:14:26.679 "data_offset": 0, 00:14:26.679 "data_size": 65536 00:14:26.679 } 00:14:26.679 ] 00:14:26.679 } 00:14:26.679 } 00:14:26.679 }' 00:14:26.679 
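The Raid Volume dump above feeds the final verify_raid_bdev_properties pass: the configured base bdev names are pulled out of driver_specific.raid and each one is probed in turn, as the trace does next. The extraction plus per-bdev loop, isolated as a sketch under the same assumptions:
  #!/usr/bin/env bash
  # Pull the configured base bdev names out of the raid's driver_specific
  # blob, then run the same per-bdev geometry probe over each of them.
  sock=/var/tmp/spdk-raid.sock
  names=$(rpc.py -s "$sock" bdev_get_bdevs -b Existed_Raid \
          | jq -r '.[] | .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
  for name in $names; do
    rpc.py -s "$sock" bdev_get_bdevs -b "$name" | jq '.[].block_size'   # 512 expected for each
  done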
03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.679 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:14:26.679 BaseBdev2 00:14:26.679 BaseBdev3' 00:14:26.679 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:26.679 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:26.679 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:26.939 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:26.939 "name": "NewBaseBdev", 00:14:26.939 "aliases": [ 00:14:26.939 "9157c468-556a-457a-9c56-637a76222b5e" 00:14:26.939 ], 00:14:26.939 "product_name": "Malloc disk", 00:14:26.939 "block_size": 512, 00:14:26.939 "num_blocks": 65536, 00:14:26.939 "uuid": "9157c468-556a-457a-9c56-637a76222b5e", 00:14:26.939 "assigned_rate_limits": { 00:14:26.939 "rw_ios_per_sec": 0, 00:14:26.939 "rw_mbytes_per_sec": 0, 00:14:26.939 "r_mbytes_per_sec": 0, 00:14:26.939 "w_mbytes_per_sec": 0 00:14:26.939 }, 00:14:26.939 "claimed": true, 00:14:26.939 "claim_type": "exclusive_write", 00:14:26.939 "zoned": false, 00:14:26.939 "supported_io_types": { 00:14:26.939 "read": true, 00:14:26.939 "write": true, 00:14:26.939 "unmap": true, 00:14:26.939 "write_zeroes": true, 00:14:26.939 "flush": true, 00:14:26.939 "reset": true, 00:14:26.939 "compare": false, 00:14:26.939 "compare_and_write": false, 00:14:26.939 "abort": true, 00:14:26.939 "nvme_admin": false, 00:14:26.939 "nvme_io": false 00:14:26.939 }, 00:14:26.939 "memory_domains": [ 00:14:26.939 { 00:14:26.939 "dma_device_id": "system", 00:14:26.939 "dma_device_type": 1 00:14:26.939 }, 00:14:26.939 { 00:14:26.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.939 "dma_device_type": 2 00:14:26.939 } 00:14:26.939 ], 00:14:26.939 "driver_specific": {} 00:14:26.939 }' 00:14:26.939 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:26.939 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:27.200 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:27.200 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:27.200 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:27.200 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:27.200 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:27.200 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:27.200 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:27.200 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:27.200 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:27.200 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:27.200 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:27.458 03:08:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:27.458 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:27.458 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:27.458 "name": "BaseBdev2", 00:14:27.458 "aliases": [ 00:14:27.458 "91bc8b33-4954-4a1e-91f2-bdb7d5d64642" 00:14:27.458 ], 00:14:27.458 "product_name": "Malloc disk", 00:14:27.458 "block_size": 512, 00:14:27.458 "num_blocks": 65536, 00:14:27.458 "uuid": "91bc8b33-4954-4a1e-91f2-bdb7d5d64642", 00:14:27.458 "assigned_rate_limits": { 00:14:27.458 "rw_ios_per_sec": 0, 00:14:27.458 "rw_mbytes_per_sec": 0, 00:14:27.458 "r_mbytes_per_sec": 0, 00:14:27.458 "w_mbytes_per_sec": 0 00:14:27.458 }, 00:14:27.458 "claimed": true, 00:14:27.458 "claim_type": "exclusive_write", 00:14:27.458 "zoned": false, 00:14:27.458 "supported_io_types": { 00:14:27.458 "read": true, 00:14:27.458 "write": true, 00:14:27.458 "unmap": true, 00:14:27.458 "write_zeroes": true, 00:14:27.458 "flush": true, 00:14:27.458 "reset": true, 00:14:27.458 "compare": false, 00:14:27.458 "compare_and_write": false, 00:14:27.458 "abort": true, 00:14:27.458 "nvme_admin": false, 00:14:27.458 "nvme_io": false 00:14:27.458 }, 00:14:27.458 "memory_domains": [ 00:14:27.458 { 00:14:27.458 "dma_device_id": "system", 00:14:27.458 "dma_device_type": 1 00:14:27.458 }, 00:14:27.458 { 00:14:27.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.458 "dma_device_type": 2 00:14:27.458 } 00:14:27.458 ], 00:14:27.458 "driver_specific": {} 00:14:27.458 }' 00:14:27.716 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:27.716 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:27.716 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:27.716 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:27.716 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:27.716 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:27.716 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:27.716 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:27.716 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:27.975 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:27.975 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:27.975 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:27.975 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:27.975 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:27.975 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:28.233 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:28.233 "name": "BaseBdev3", 00:14:28.233 "aliases": [ 00:14:28.233 
"41f4aa3b-858a-4daf-916f-9f7bbf32c657" 00:14:28.233 ], 00:14:28.233 "product_name": "Malloc disk", 00:14:28.233 "block_size": 512, 00:14:28.233 "num_blocks": 65536, 00:14:28.233 "uuid": "41f4aa3b-858a-4daf-916f-9f7bbf32c657", 00:14:28.233 "assigned_rate_limits": { 00:14:28.233 "rw_ios_per_sec": 0, 00:14:28.233 "rw_mbytes_per_sec": 0, 00:14:28.233 "r_mbytes_per_sec": 0, 00:14:28.233 "w_mbytes_per_sec": 0 00:14:28.233 }, 00:14:28.233 "claimed": true, 00:14:28.233 "claim_type": "exclusive_write", 00:14:28.233 "zoned": false, 00:14:28.233 "supported_io_types": { 00:14:28.233 "read": true, 00:14:28.233 "write": true, 00:14:28.233 "unmap": true, 00:14:28.233 "write_zeroes": true, 00:14:28.233 "flush": true, 00:14:28.233 "reset": true, 00:14:28.233 "compare": false, 00:14:28.233 "compare_and_write": false, 00:14:28.233 "abort": true, 00:14:28.233 "nvme_admin": false, 00:14:28.233 "nvme_io": false 00:14:28.233 }, 00:14:28.233 "memory_domains": [ 00:14:28.233 { 00:14:28.233 "dma_device_id": "system", 00:14:28.233 "dma_device_type": 1 00:14:28.233 }, 00:14:28.233 { 00:14:28.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.233 "dma_device_type": 2 00:14:28.233 } 00:14:28.233 ], 00:14:28.233 "driver_specific": {} 00:14:28.233 }' 00:14:28.233 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:28.233 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:28.233 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:28.233 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:28.233 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:28.490 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:28.490 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:28.490 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:28.490 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:28.490 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:28.490 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:28.490 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:28.490 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:28.749 [2024-05-15 03:08:59.818637] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:28.749 [2024-05-15 03:08:59.818660] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.749 [2024-05-15 03:08:59.818707] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.749 [2024-05-15 03:08:59.818986] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.749 [2024-05-15 03:08:59.818997] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e06410 name Existed_Raid, state offline 00:14:28.749 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 4087220 00:14:28.749 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 
-- # '[' -z 4087220 ']' 00:14:28.749 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 4087220 00:14:28.749 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:14:28.749 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:28.749 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4087220 00:14:28.749 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:28.749 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:28.749 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4087220' 00:14:28.749 killing process with pid 4087220 00:14:28.749 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 4087220 00:14:28.749 [2024-05-15 03:08:59.883438] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:28.750 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 4087220 00:14:28.750 [2024-05-15 03:08:59.907924] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.007 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:14:29.007 00:14:29.007 real 0m28.831s 00:14:29.007 user 0m54.003s 00:14:29.007 sys 0m4.110s 00:14:29.007 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:29.007 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.007 ************************************ 00:14:29.007 END TEST raid_state_function_test 00:14:29.007 ************************************ 00:14:29.266 03:09:00 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:14:29.266 03:09:00 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:29.266 03:09:00 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:29.266 03:09:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.266 ************************************ 00:14:29.266 START TEST raid_state_function_test_sb 00:14:29.266 ************************************ 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 true 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:29.266 
03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=4092601 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4092601' 00:14:29.266 Process raid pid: 4092601 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 4092601 /var/tmp/spdk-raid.sock 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 4092601 ']' 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:29.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:29.266 03:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.266 [2024-05-15 03:09:00.270972] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
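The bring-up sequence above — launching test/app/bdev_svc/bdev_svc with -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid and then blocking in waitforlisten — can be sketched in isolation as below, assuming an SPDK build tree. The real waitforlisten helper in autotest_common.sh does more bookkeeping than this loop; rpc_get_methods is used here only as a cheap liveness probe:

    # Simplified stand-in for waitforlisten: start the target, then poll until
    # the Unix socket exists and answers a trivial RPC.
    sock=/var/tmp/spdk-raid.sock
    ./test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    for _ in $(seq 1 100); do
        if [ -S "$sock" ] &&
            ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done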
00:14:29.266 [2024-05-15 03:09:00.271024] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.266 [2024-05-15 03:09:00.368360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.525 [2024-05-15 03:09:00.462631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.525 [2024-05-15 03:09:00.520161] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.525 [2024-05-15 03:09:00.520193] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.092 03:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:30.092 03:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:14:30.092 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:30.351 [2024-05-15 03:09:01.459021] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:30.351 [2024-05-15 03:09:01.459060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:30.351 [2024-05-15 03:09:01.459069] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.351 [2024-05-15 03:09:01.459078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.351 [2024-05-15 03:09:01.459085] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:30.351 [2024-05-15 03:09:01.459093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:30.351 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:30.351 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:30.351 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:30.351 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:30.351 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:30.351 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:30.351 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:30.351 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:30.351 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:30.351 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:30.351 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.351 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.609 03:09:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:30.609 "name": "Existed_Raid", 00:14:30.609 "uuid": "38c9b723-3978-425c-b532-eb08ad85cb73", 00:14:30.609 "strip_size_kb": 0, 00:14:30.609 "state": "configuring", 00:14:30.609 "raid_level": "raid1", 00:14:30.609 "superblock": true, 00:14:30.609 "num_base_bdevs": 3, 00:14:30.609 "num_base_bdevs_discovered": 0, 00:14:30.609 "num_base_bdevs_operational": 3, 00:14:30.609 "base_bdevs_list": [ 00:14:30.609 { 00:14:30.609 "name": "BaseBdev1", 00:14:30.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.609 "is_configured": false, 00:14:30.609 "data_offset": 0, 00:14:30.609 "data_size": 0 00:14:30.609 }, 00:14:30.609 { 00:14:30.609 "name": "BaseBdev2", 00:14:30.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.609 "is_configured": false, 00:14:30.609 "data_offset": 0, 00:14:30.609 "data_size": 0 00:14:30.609 }, 00:14:30.609 { 00:14:30.609 "name": "BaseBdev3", 00:14:30.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.609 "is_configured": false, 00:14:30.609 "data_offset": 0, 00:14:30.609 "data_size": 0 00:14:30.609 } 00:14:30.609 ] 00:14:30.609 }' 00:14:30.609 03:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:30.609 03:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.177 03:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:31.436 [2024-05-15 03:09:02.553790] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.436 [2024-05-15 03:09:02.553819] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a3de0 name Existed_Raid, state configuring 00:14:31.436 03:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:31.695 [2024-05-15 03:09:02.794449] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.695 [2024-05-15 03:09:02.794473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.695 [2024-05-15 03:09:02.794481] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.695 [2024-05-15 03:09:02.794490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.695 [2024-05-15 03:09:02.794497] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:31.695 [2024-05-15 03:09:02.794505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.695 03:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:31.953 [2024-05-15 03:09:03.056490] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.953 BaseBdev1 00:14:31.953 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:14:31.953 03:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:31.953 03:09:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:31.953 03:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:31.953 03:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:31.953 03:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:31.953 03:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:32.212 03:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:32.475 [ 00:14:32.475 { 00:14:32.475 "name": "BaseBdev1", 00:14:32.475 "aliases": [ 00:14:32.475 "45eae9b5-364f-4aac-8358-ed50c02c8b94" 00:14:32.475 ], 00:14:32.475 "product_name": "Malloc disk", 00:14:32.475 "block_size": 512, 00:14:32.475 "num_blocks": 65536, 00:14:32.475 "uuid": "45eae9b5-364f-4aac-8358-ed50c02c8b94", 00:14:32.475 "assigned_rate_limits": { 00:14:32.475 "rw_ios_per_sec": 0, 00:14:32.475 "rw_mbytes_per_sec": 0, 00:14:32.475 "r_mbytes_per_sec": 0, 00:14:32.475 "w_mbytes_per_sec": 0 00:14:32.475 }, 00:14:32.475 "claimed": true, 00:14:32.475 "claim_type": "exclusive_write", 00:14:32.475 "zoned": false, 00:14:32.475 "supported_io_types": { 00:14:32.475 "read": true, 00:14:32.475 "write": true, 00:14:32.475 "unmap": true, 00:14:32.475 "write_zeroes": true, 00:14:32.475 "flush": true, 00:14:32.475 "reset": true, 00:14:32.475 "compare": false, 00:14:32.475 "compare_and_write": false, 00:14:32.475 "abort": true, 00:14:32.475 "nvme_admin": false, 00:14:32.475 "nvme_io": false 00:14:32.475 }, 00:14:32.475 "memory_domains": [ 00:14:32.475 { 00:14:32.475 "dma_device_id": "system", 00:14:32.475 "dma_device_type": 1 00:14:32.475 }, 00:14:32.475 { 00:14:32.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.475 "dma_device_type": 2 00:14:32.475 } 00:14:32.475 ], 00:14:32.475 "driver_specific": {} 00:14:32.475 } 00:14:32.475 ] 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.475 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.735 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:32.735 "name": "Existed_Raid", 00:14:32.735 "uuid": "c8168c52-8228-4c49-b561-6001e13314b0", 00:14:32.735 "strip_size_kb": 0, 00:14:32.735 "state": "configuring", 00:14:32.735 "raid_level": "raid1", 00:14:32.735 "superblock": true, 00:14:32.735 "num_base_bdevs": 3, 00:14:32.735 "num_base_bdevs_discovered": 1, 00:14:32.735 "num_base_bdevs_operational": 3, 00:14:32.735 "base_bdevs_list": [ 00:14:32.735 { 00:14:32.735 "name": "BaseBdev1", 00:14:32.735 "uuid": "45eae9b5-364f-4aac-8358-ed50c02c8b94", 00:14:32.735 "is_configured": true, 00:14:32.735 "data_offset": 2048, 00:14:32.735 "data_size": 63488 00:14:32.735 }, 00:14:32.735 { 00:14:32.735 "name": "BaseBdev2", 00:14:32.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.735 "is_configured": false, 00:14:32.735 "data_offset": 0, 00:14:32.735 "data_size": 0 00:14:32.735 }, 00:14:32.735 { 00:14:32.735 "name": "BaseBdev3", 00:14:32.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.735 "is_configured": false, 00:14:32.735 "data_offset": 0, 00:14:32.735 "data_size": 0 00:14:32.735 } 00:14:32.735 ] 00:14:32.735 }' 00:14:32.735 03:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:32.735 03:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.301 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:33.560 [2024-05-15 03:09:04.668821] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:33.560 [2024-05-15 03:09:04.668869] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a36b0 name Existed_Raid, state configuring 00:14:33.560 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:33.818 [2024-05-15 03:09:04.921538] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.818 [2024-05-15 03:09:04.923116] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:33.818 [2024-05-15 03:09:04.923148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:33.818 [2024-05-15 03:09:04.923156] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:33.818 [2024-05-15 03:09:04.923164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.077 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:34.077 "name": "Existed_Raid", 00:14:34.077 "uuid": "3c8adab4-34bb-4d97-a4c7-adcd4b57c097", 00:14:34.077 "strip_size_kb": 0, 00:14:34.077 "state": "configuring", 00:14:34.077 "raid_level": "raid1", 00:14:34.077 "superblock": true, 00:14:34.077 "num_base_bdevs": 3, 00:14:34.077 "num_base_bdevs_discovered": 1, 00:14:34.077 "num_base_bdevs_operational": 3, 00:14:34.077 "base_bdevs_list": [ 00:14:34.077 { 00:14:34.077 "name": "BaseBdev1", 00:14:34.077 "uuid": "45eae9b5-364f-4aac-8358-ed50c02c8b94", 00:14:34.077 "is_configured": true, 00:14:34.077 "data_offset": 2048, 00:14:34.077 "data_size": 63488 00:14:34.077 }, 00:14:34.077 { 00:14:34.077 "name": "BaseBdev2", 00:14:34.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.077 "is_configured": false, 00:14:34.077 "data_offset": 0, 00:14:34.077 "data_size": 0 00:14:34.077 }, 00:14:34.077 { 00:14:34.077 "name": "BaseBdev3", 00:14:34.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.077 "is_configured": false, 00:14:34.077 "data_offset": 0, 00:14:34.077 "data_size": 0 00:14:34.077 } 00:14:34.077 ] 00:14:34.077 }' 00:14:34.077 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:34.077 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.012 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:35.012 [2024-05-15 03:09:06.051840] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.012 BaseBdev2 00:14:35.012 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:14:35.012 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:35.012 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:35.012 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:35.012 03:09:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:35.012 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:35.012 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:35.271 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:35.529 [ 00:14:35.529 { 00:14:35.529 "name": "BaseBdev2", 00:14:35.529 "aliases": [ 00:14:35.529 "8e6db05a-6b23-4bed-a290-6137cb087973" 00:14:35.529 ], 00:14:35.529 "product_name": "Malloc disk", 00:14:35.529 "block_size": 512, 00:14:35.529 "num_blocks": 65536, 00:14:35.529 "uuid": "8e6db05a-6b23-4bed-a290-6137cb087973", 00:14:35.529 "assigned_rate_limits": { 00:14:35.529 "rw_ios_per_sec": 0, 00:14:35.529 "rw_mbytes_per_sec": 0, 00:14:35.529 "r_mbytes_per_sec": 0, 00:14:35.529 "w_mbytes_per_sec": 0 00:14:35.529 }, 00:14:35.529 "claimed": true, 00:14:35.529 "claim_type": "exclusive_write", 00:14:35.529 "zoned": false, 00:14:35.529 "supported_io_types": { 00:14:35.529 "read": true, 00:14:35.529 "write": true, 00:14:35.529 "unmap": true, 00:14:35.529 "write_zeroes": true, 00:14:35.529 "flush": true, 00:14:35.529 "reset": true, 00:14:35.529 "compare": false, 00:14:35.529 "compare_and_write": false, 00:14:35.529 "abort": true, 00:14:35.529 "nvme_admin": false, 00:14:35.529 "nvme_io": false 00:14:35.529 }, 00:14:35.529 "memory_domains": [ 00:14:35.529 { 00:14:35.529 "dma_device_id": "system", 00:14:35.529 "dma_device_type": 1 00:14:35.529 }, 00:14:35.529 { 00:14:35.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.529 "dma_device_type": 2 00:14:35.529 } 00:14:35.529 ], 00:14:35.529 "driver_specific": {} 00:14:35.529 } 00:14:35.529 ] 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:35.529 03:09:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.529 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.788 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:35.788 "name": "Existed_Raid", 00:14:35.788 "uuid": "3c8adab4-34bb-4d97-a4c7-adcd4b57c097", 00:14:35.788 "strip_size_kb": 0, 00:14:35.788 "state": "configuring", 00:14:35.788 "raid_level": "raid1", 00:14:35.788 "superblock": true, 00:14:35.788 "num_base_bdevs": 3, 00:14:35.788 "num_base_bdevs_discovered": 2, 00:14:35.788 "num_base_bdevs_operational": 3, 00:14:35.788 "base_bdevs_list": [ 00:14:35.788 { 00:14:35.788 "name": "BaseBdev1", 00:14:35.788 "uuid": "45eae9b5-364f-4aac-8358-ed50c02c8b94", 00:14:35.788 "is_configured": true, 00:14:35.788 "data_offset": 2048, 00:14:35.788 "data_size": 63488 00:14:35.788 }, 00:14:35.788 { 00:14:35.788 "name": "BaseBdev2", 00:14:35.788 "uuid": "8e6db05a-6b23-4bed-a290-6137cb087973", 00:14:35.788 "is_configured": true, 00:14:35.788 "data_offset": 2048, 00:14:35.788 "data_size": 63488 00:14:35.788 }, 00:14:35.788 { 00:14:35.788 "name": "BaseBdev3", 00:14:35.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.788 "is_configured": false, 00:14:35.788 "data_offset": 0, 00:14:35.788 "data_size": 0 00:14:35.788 } 00:14:35.788 ] 00:14:35.788 }' 00:14:35.788 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:35.788 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.354 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:36.612 [2024-05-15 03:09:07.631270] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.612 [2024-05-15 03:09:07.631425] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x9a4760 00:14:36.612 [2024-05-15 03:09:07.631438] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:36.612 [2024-05-15 03:09:07.631619] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9bb690 00:14:36.612 [2024-05-15 03:09:07.631746] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9a4760 00:14:36.612 [2024-05-15 03:09:07.631755] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x9a4760 00:14:36.612 [2024-05-15 03:09:07.631863] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.612 BaseBdev3 00:14:36.612 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:14:36.613 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:36.613 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:36.613 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:36.613 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:36.613 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 
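The bdev_timeout=2000 assignment just logged feeds waitforbdev: the helper first flushes examine callbacks with bdev_wait_for_examine, then lets bdev_get_bdevs itself wait up to that many milliseconds for the bdev to appear — both RPCs and the -t 2000 flag are visible in the surrounding log. A compact sketch of the step for BaseBdev3, assuming the same socket as above:

    # Sketch of the waitforbdev step; exits non-zero if BaseBdev3 does not
    # show up within the 2000 ms timeout.
    sock=/var/tmp/spdk-raid.sock
    ./scripts/rpc.py -s "$sock" bdev_wait_for_examine
    ./scripts/rpc.py -s "$sock" bdev_get_bdevs -b BaseBdev3 -t 2000 >/dev/null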
00:14:36.613 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:36.871 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:37.129 [ 00:14:37.129 { 00:14:37.129 "name": "BaseBdev3", 00:14:37.129 "aliases": [ 00:14:37.129 "e1ece488-40bc-47a3-a401-3e8abc910f0b" 00:14:37.129 ], 00:14:37.129 "product_name": "Malloc disk", 00:14:37.129 "block_size": 512, 00:14:37.129 "num_blocks": 65536, 00:14:37.129 "uuid": "e1ece488-40bc-47a3-a401-3e8abc910f0b", 00:14:37.129 "assigned_rate_limits": { 00:14:37.129 "rw_ios_per_sec": 0, 00:14:37.129 "rw_mbytes_per_sec": 0, 00:14:37.129 "r_mbytes_per_sec": 0, 00:14:37.129 "w_mbytes_per_sec": 0 00:14:37.129 }, 00:14:37.129 "claimed": true, 00:14:37.129 "claim_type": "exclusive_write", 00:14:37.129 "zoned": false, 00:14:37.129 "supported_io_types": { 00:14:37.129 "read": true, 00:14:37.129 "write": true, 00:14:37.129 "unmap": true, 00:14:37.129 "write_zeroes": true, 00:14:37.129 "flush": true, 00:14:37.129 "reset": true, 00:14:37.129 "compare": false, 00:14:37.129 "compare_and_write": false, 00:14:37.129 "abort": true, 00:14:37.130 "nvme_admin": false, 00:14:37.130 "nvme_io": false 00:14:37.130 }, 00:14:37.130 "memory_domains": [ 00:14:37.130 { 00:14:37.130 "dma_device_id": "system", 00:14:37.130 "dma_device_type": 1 00:14:37.130 }, 00:14:37.130 { 00:14:37.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.130 "dma_device_type": 2 00:14:37.130 } 00:14:37.130 ], 00:14:37.130 "driver_specific": {} 00:14:37.130 } 00:14:37.130 ] 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.130 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.130 03:09:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.388 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.388 "name": "Existed_Raid", 00:14:37.388 "uuid": "3c8adab4-34bb-4d97-a4c7-adcd4b57c097", 00:14:37.388 "strip_size_kb": 0, 00:14:37.388 "state": "online", 00:14:37.388 "raid_level": "raid1", 00:14:37.388 "superblock": true, 00:14:37.389 "num_base_bdevs": 3, 00:14:37.389 "num_base_bdevs_discovered": 3, 00:14:37.389 "num_base_bdevs_operational": 3, 00:14:37.389 "base_bdevs_list": [ 00:14:37.389 { 00:14:37.389 "name": "BaseBdev1", 00:14:37.389 "uuid": "45eae9b5-364f-4aac-8358-ed50c02c8b94", 00:14:37.389 "is_configured": true, 00:14:37.389 "data_offset": 2048, 00:14:37.389 "data_size": 63488 00:14:37.389 }, 00:14:37.389 { 00:14:37.389 "name": "BaseBdev2", 00:14:37.389 "uuid": "8e6db05a-6b23-4bed-a290-6137cb087973", 00:14:37.389 "is_configured": true, 00:14:37.389 "data_offset": 2048, 00:14:37.389 "data_size": 63488 00:14:37.389 }, 00:14:37.389 { 00:14:37.389 "name": "BaseBdev3", 00:14:37.389 "uuid": "e1ece488-40bc-47a3-a401-3e8abc910f0b", 00:14:37.389 "is_configured": true, 00:14:37.389 "data_offset": 2048, 00:14:37.389 "data_size": 63488 00:14:37.389 } 00:14:37.389 ] 00:14:37.389 }' 00:14:37.389 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.389 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.957 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:14:37.957 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:37.957 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:37.957 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:37.957 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:37.957 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:14:37.957 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:37.957 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:37.957 [2024-05-15 03:09:09.075549] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.957 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:37.957 "name": "Existed_Raid", 00:14:37.957 "aliases": [ 00:14:37.957 "3c8adab4-34bb-4d97-a4c7-adcd4b57c097" 00:14:37.957 ], 00:14:37.957 "product_name": "Raid Volume", 00:14:37.957 "block_size": 512, 00:14:37.957 "num_blocks": 63488, 00:14:37.957 "uuid": "3c8adab4-34bb-4d97-a4c7-adcd4b57c097", 00:14:37.957 "assigned_rate_limits": { 00:14:37.957 "rw_ios_per_sec": 0, 00:14:37.957 "rw_mbytes_per_sec": 0, 00:14:37.957 "r_mbytes_per_sec": 0, 00:14:37.957 "w_mbytes_per_sec": 0 00:14:37.957 }, 00:14:37.957 "claimed": false, 00:14:37.957 "zoned": false, 00:14:37.957 "supported_io_types": { 00:14:37.957 "read": true, 00:14:37.957 "write": true, 00:14:37.957 "unmap": false, 00:14:37.957 "write_zeroes": true, 00:14:37.957 "flush": false, 00:14:37.957 "reset": true, 00:14:37.957 
"compare": false, 00:14:37.957 "compare_and_write": false, 00:14:37.957 "abort": false, 00:14:37.957 "nvme_admin": false, 00:14:37.957 "nvme_io": false 00:14:37.957 }, 00:14:37.957 "memory_domains": [ 00:14:37.957 { 00:14:37.957 "dma_device_id": "system", 00:14:37.957 "dma_device_type": 1 00:14:37.957 }, 00:14:37.957 { 00:14:37.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.957 "dma_device_type": 2 00:14:37.957 }, 00:14:37.957 { 00:14:37.957 "dma_device_id": "system", 00:14:37.957 "dma_device_type": 1 00:14:37.957 }, 00:14:37.957 { 00:14:37.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.957 "dma_device_type": 2 00:14:37.957 }, 00:14:37.957 { 00:14:37.957 "dma_device_id": "system", 00:14:37.957 "dma_device_type": 1 00:14:37.957 }, 00:14:37.957 { 00:14:37.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.957 "dma_device_type": 2 00:14:37.957 } 00:14:37.957 ], 00:14:37.957 "driver_specific": { 00:14:37.957 "raid": { 00:14:37.957 "uuid": "3c8adab4-34bb-4d97-a4c7-adcd4b57c097", 00:14:37.957 "strip_size_kb": 0, 00:14:37.957 "state": "online", 00:14:37.957 "raid_level": "raid1", 00:14:37.957 "superblock": true, 00:14:37.957 "num_base_bdevs": 3, 00:14:37.957 "num_base_bdevs_discovered": 3, 00:14:37.957 "num_base_bdevs_operational": 3, 00:14:37.957 "base_bdevs_list": [ 00:14:37.957 { 00:14:37.957 "name": "BaseBdev1", 00:14:37.957 "uuid": "45eae9b5-364f-4aac-8358-ed50c02c8b94", 00:14:37.957 "is_configured": true, 00:14:37.957 "data_offset": 2048, 00:14:37.957 "data_size": 63488 00:14:37.957 }, 00:14:37.957 { 00:14:37.957 "name": "BaseBdev2", 00:14:37.957 "uuid": "8e6db05a-6b23-4bed-a290-6137cb087973", 00:14:37.957 "is_configured": true, 00:14:37.957 "data_offset": 2048, 00:14:37.957 "data_size": 63488 00:14:37.957 }, 00:14:37.957 { 00:14:37.957 "name": "BaseBdev3", 00:14:37.957 "uuid": "e1ece488-40bc-47a3-a401-3e8abc910f0b", 00:14:37.957 "is_configured": true, 00:14:37.957 "data_offset": 2048, 00:14:37.957 "data_size": 63488 00:14:37.957 } 00:14:37.957 ] 00:14:37.957 } 00:14:37.957 } 00:14:37.957 }' 00:14:37.957 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:38.216 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:14:38.216 BaseBdev2 00:14:38.216 BaseBdev3' 00:14:38.216 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:38.216 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:38.216 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:38.475 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:38.475 "name": "BaseBdev1", 00:14:38.475 "aliases": [ 00:14:38.475 "45eae9b5-364f-4aac-8358-ed50c02c8b94" 00:14:38.475 ], 00:14:38.475 "product_name": "Malloc disk", 00:14:38.475 "block_size": 512, 00:14:38.475 "num_blocks": 65536, 00:14:38.475 "uuid": "45eae9b5-364f-4aac-8358-ed50c02c8b94", 00:14:38.475 "assigned_rate_limits": { 00:14:38.475 "rw_ios_per_sec": 0, 00:14:38.475 "rw_mbytes_per_sec": 0, 00:14:38.475 "r_mbytes_per_sec": 0, 00:14:38.475 "w_mbytes_per_sec": 0 00:14:38.475 }, 00:14:38.475 "claimed": true, 00:14:38.475 "claim_type": "exclusive_write", 00:14:38.475 "zoned": false, 00:14:38.475 
"supported_io_types": { 00:14:38.475 "read": true, 00:14:38.475 "write": true, 00:14:38.475 "unmap": true, 00:14:38.475 "write_zeroes": true, 00:14:38.475 "flush": true, 00:14:38.475 "reset": true, 00:14:38.475 "compare": false, 00:14:38.475 "compare_and_write": false, 00:14:38.475 "abort": true, 00:14:38.475 "nvme_admin": false, 00:14:38.475 "nvme_io": false 00:14:38.475 }, 00:14:38.475 "memory_domains": [ 00:14:38.475 { 00:14:38.475 "dma_device_id": "system", 00:14:38.475 "dma_device_type": 1 00:14:38.475 }, 00:14:38.475 { 00:14:38.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.475 "dma_device_type": 2 00:14:38.475 } 00:14:38.475 ], 00:14:38.475 "driver_specific": {} 00:14:38.475 }' 00:14:38.475 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:38.475 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:38.475 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:38.475 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:38.475 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:38.475 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:38.475 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:38.734 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:38.734 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:38.734 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:38.734 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:38.734 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:38.734 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:38.734 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:38.734 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:38.993 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:38.993 "name": "BaseBdev2", 00:14:38.993 "aliases": [ 00:14:38.993 "8e6db05a-6b23-4bed-a290-6137cb087973" 00:14:38.993 ], 00:14:38.993 "product_name": "Malloc disk", 00:14:38.993 "block_size": 512, 00:14:38.993 "num_blocks": 65536, 00:14:38.993 "uuid": "8e6db05a-6b23-4bed-a290-6137cb087973", 00:14:38.993 "assigned_rate_limits": { 00:14:38.993 "rw_ios_per_sec": 0, 00:14:38.993 "rw_mbytes_per_sec": 0, 00:14:38.993 "r_mbytes_per_sec": 0, 00:14:38.993 "w_mbytes_per_sec": 0 00:14:38.993 }, 00:14:38.993 "claimed": true, 00:14:38.993 "claim_type": "exclusive_write", 00:14:38.993 "zoned": false, 00:14:38.993 "supported_io_types": { 00:14:38.993 "read": true, 00:14:38.993 "write": true, 00:14:38.993 "unmap": true, 00:14:38.993 "write_zeroes": true, 00:14:38.993 "flush": true, 00:14:38.993 "reset": true, 00:14:38.993 "compare": false, 00:14:38.993 "compare_and_write": false, 00:14:38.993 "abort": true, 00:14:38.993 "nvme_admin": false, 00:14:38.993 "nvme_io": false 00:14:38.993 }, 00:14:38.993 "memory_domains": [ 00:14:38.993 { 
00:14:38.993 "dma_device_id": "system", 00:14:38.993 "dma_device_type": 1 00:14:38.993 }, 00:14:38.993 { 00:14:38.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.993 "dma_device_type": 2 00:14:38.993 } 00:14:38.993 ], 00:14:38.993 "driver_specific": {} 00:14:38.993 }' 00:14:38.993 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:38.993 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:38.993 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:38.993 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:39.285 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:39.285 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:39.285 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:39.285 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:39.285 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.285 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:39.285 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:39.550 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:39.550 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:39.550 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:39.550 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:39.550 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:39.550 "name": "BaseBdev3", 00:14:39.550 "aliases": [ 00:14:39.550 "e1ece488-40bc-47a3-a401-3e8abc910f0b" 00:14:39.550 ], 00:14:39.550 "product_name": "Malloc disk", 00:14:39.550 "block_size": 512, 00:14:39.550 "num_blocks": 65536, 00:14:39.550 "uuid": "e1ece488-40bc-47a3-a401-3e8abc910f0b", 00:14:39.550 "assigned_rate_limits": { 00:14:39.550 "rw_ios_per_sec": 0, 00:14:39.550 "rw_mbytes_per_sec": 0, 00:14:39.550 "r_mbytes_per_sec": 0, 00:14:39.550 "w_mbytes_per_sec": 0 00:14:39.550 }, 00:14:39.550 "claimed": true, 00:14:39.550 "claim_type": "exclusive_write", 00:14:39.550 "zoned": false, 00:14:39.550 "supported_io_types": { 00:14:39.550 "read": true, 00:14:39.550 "write": true, 00:14:39.550 "unmap": true, 00:14:39.550 "write_zeroes": true, 00:14:39.550 "flush": true, 00:14:39.550 "reset": true, 00:14:39.550 "compare": false, 00:14:39.550 "compare_and_write": false, 00:14:39.550 "abort": true, 00:14:39.550 "nvme_admin": false, 00:14:39.550 "nvme_io": false 00:14:39.550 }, 00:14:39.550 "memory_domains": [ 00:14:39.550 { 00:14:39.550 "dma_device_id": "system", 00:14:39.550 "dma_device_type": 1 00:14:39.550 }, 00:14:39.550 { 00:14:39.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.550 "dma_device_type": 2 00:14:39.550 } 00:14:39.550 ], 00:14:39.550 "driver_specific": {} 00:14:39.550 }' 00:14:39.550 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:39.808 03:09:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:39.808 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:39.808 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:39.808 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:39.808 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:39.808 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:39.808 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:39.808 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.808 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:40.067 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:40.067 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:40.067 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:40.327 [2024-05-15 03:09:11.273310] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.327 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.586 03:09:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:40.586 "name": "Existed_Raid", 00:14:40.586 "uuid": "3c8adab4-34bb-4d97-a4c7-adcd4b57c097", 00:14:40.586 "strip_size_kb": 0, 00:14:40.586 "state": "online", 00:14:40.586 "raid_level": "raid1", 00:14:40.586 "superblock": true, 00:14:40.586 "num_base_bdevs": 3, 00:14:40.586 "num_base_bdevs_discovered": 2, 00:14:40.586 "num_base_bdevs_operational": 2, 00:14:40.586 "base_bdevs_list": [ 00:14:40.586 { 00:14:40.586 "name": null, 00:14:40.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.586 "is_configured": false, 00:14:40.586 "data_offset": 2048, 00:14:40.586 "data_size": 63488 00:14:40.586 }, 00:14:40.586 { 00:14:40.586 "name": "BaseBdev2", 00:14:40.586 "uuid": "8e6db05a-6b23-4bed-a290-6137cb087973", 00:14:40.586 "is_configured": true, 00:14:40.586 "data_offset": 2048, 00:14:40.586 "data_size": 63488 00:14:40.586 }, 00:14:40.586 { 00:14:40.586 "name": "BaseBdev3", 00:14:40.586 "uuid": "e1ece488-40bc-47a3-a401-3e8abc910f0b", 00:14:40.586 "is_configured": true, 00:14:40.586 "data_offset": 2048, 00:14:40.586 "data_size": 63488 00:14:40.586 } 00:14:40.586 ] 00:14:40.586 }' 00:14:40.586 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:40.586 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.154 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:41.154 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:41.154 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.154 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:41.413 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:41.413 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:41.413 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:41.672 [2024-05-15 03:09:12.674320] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.672 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:41.672 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:41.672 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.672 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:41.931 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:41.931 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:41.931 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:42.189 [2024-05-15 03:09:13.194354] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev3 00:14:42.189 [2024-05-15 03:09:13.194423] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.189 [2024-05-15 03:09:13.205036] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.189 [2024-05-15 03:09:13.205093] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.189 [2024-05-15 03:09:13.205103] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a4760 name Existed_Raid, state offline 00:14:42.189 03:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:42.189 03:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:42.189 03:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.189 03:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:14:42.456 03:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:14:42.456 03:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:14:42.456 03:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:14:42.456 03:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:14:42.456 03:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:42.456 03:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:42.714 BaseBdev2 00:14:42.714 03:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:14:42.714 03:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:42.714 03:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:42.714 03:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:42.714 03:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:42.714 03:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:42.714 03:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:42.974 03:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.233 [ 00:14:43.233 { 00:14:43.233 "name": "BaseBdev2", 00:14:43.233 "aliases": [ 00:14:43.233 "4619fb68-18f5-441f-b199-36705a7c6748" 00:14:43.233 ], 00:14:43.233 "product_name": "Malloc disk", 00:14:43.233 "block_size": 512, 00:14:43.233 "num_blocks": 65536, 00:14:43.233 "uuid": "4619fb68-18f5-441f-b199-36705a7c6748", 00:14:43.233 "assigned_rate_limits": { 00:14:43.233 "rw_ios_per_sec": 0, 00:14:43.233 "rw_mbytes_per_sec": 0, 00:14:43.233 "r_mbytes_per_sec": 0, 00:14:43.233 "w_mbytes_per_sec": 0 00:14:43.233 }, 00:14:43.233 
"claimed": false, 00:14:43.233 "zoned": false, 00:14:43.233 "supported_io_types": { 00:14:43.233 "read": true, 00:14:43.233 "write": true, 00:14:43.233 "unmap": true, 00:14:43.233 "write_zeroes": true, 00:14:43.233 "flush": true, 00:14:43.233 "reset": true, 00:14:43.233 "compare": false, 00:14:43.234 "compare_and_write": false, 00:14:43.234 "abort": true, 00:14:43.234 "nvme_admin": false, 00:14:43.234 "nvme_io": false 00:14:43.234 }, 00:14:43.234 "memory_domains": [ 00:14:43.234 { 00:14:43.234 "dma_device_id": "system", 00:14:43.234 "dma_device_type": 1 00:14:43.234 }, 00:14:43.234 { 00:14:43.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.234 "dma_device_type": 2 00:14:43.234 } 00:14:43.234 ], 00:14:43.234 "driver_specific": {} 00:14:43.234 } 00:14:43.234 ] 00:14:43.234 03:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:43.234 03:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:43.234 03:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:43.234 03:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:43.492 BaseBdev3 00:14:43.492 03:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:14:43.492 03:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:43.492 03:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:43.492 03:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:43.492 03:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:43.492 03:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:43.492 03:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:43.750 03:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:44.010 [ 00:14:44.010 { 00:14:44.010 "name": "BaseBdev3", 00:14:44.010 "aliases": [ 00:14:44.010 "c3af495d-2627-48a9-afd3-5e67aa898c47" 00:14:44.010 ], 00:14:44.010 "product_name": "Malloc disk", 00:14:44.010 "block_size": 512, 00:14:44.010 "num_blocks": 65536, 00:14:44.010 "uuid": "c3af495d-2627-48a9-afd3-5e67aa898c47", 00:14:44.010 "assigned_rate_limits": { 00:14:44.010 "rw_ios_per_sec": 0, 00:14:44.010 "rw_mbytes_per_sec": 0, 00:14:44.010 "r_mbytes_per_sec": 0, 00:14:44.010 "w_mbytes_per_sec": 0 00:14:44.010 }, 00:14:44.010 "claimed": false, 00:14:44.010 "zoned": false, 00:14:44.010 "supported_io_types": { 00:14:44.010 "read": true, 00:14:44.010 "write": true, 00:14:44.010 "unmap": true, 00:14:44.010 "write_zeroes": true, 00:14:44.010 "flush": true, 00:14:44.010 "reset": true, 00:14:44.010 "compare": false, 00:14:44.010 "compare_and_write": false, 00:14:44.010 "abort": true, 00:14:44.010 "nvme_admin": false, 00:14:44.010 "nvme_io": false 00:14:44.010 }, 00:14:44.010 "memory_domains": [ 00:14:44.010 { 00:14:44.010 "dma_device_id": "system", 00:14:44.010 
"dma_device_type": 1 00:14:44.010 }, 00:14:44.010 { 00:14:44.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.010 "dma_device_type": 2 00:14:44.010 } 00:14:44.010 ], 00:14:44.010 "driver_specific": {} 00:14:44.010 } 00:14:44.010 ] 00:14:44.010 03:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:44.010 03:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:44.010 03:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:44.010 03:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:44.269 [2024-05-15 03:09:15.214898] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.269 [2024-05-15 03:09:15.214936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.269 [2024-05-15 03:09:15.214954] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.269 [2024-05-15 03:09:15.216352] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.269 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:44.269 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:44.269 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:44.269 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:44.269 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:44.269 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:44.269 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:44.269 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:44.269 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:44.269 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:44.269 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.269 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.528 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.528 "name": "Existed_Raid", 00:14:44.528 "uuid": "ccb00ea2-b002-41b7-80fb-e8b45716cf7d", 00:14:44.528 "strip_size_kb": 0, 00:14:44.528 "state": "configuring", 00:14:44.528 "raid_level": "raid1", 00:14:44.528 "superblock": true, 00:14:44.528 "num_base_bdevs": 3, 00:14:44.528 "num_base_bdevs_discovered": 2, 00:14:44.528 "num_base_bdevs_operational": 3, 00:14:44.528 "base_bdevs_list": [ 00:14:44.528 { 00:14:44.528 "name": "BaseBdev1", 00:14:44.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.528 "is_configured": false, 00:14:44.528 "data_offset": 0, 00:14:44.528 
"data_size": 0 00:14:44.528 }, 00:14:44.528 { 00:14:44.528 "name": "BaseBdev2", 00:14:44.528 "uuid": "4619fb68-18f5-441f-b199-36705a7c6748", 00:14:44.528 "is_configured": true, 00:14:44.528 "data_offset": 2048, 00:14:44.528 "data_size": 63488 00:14:44.528 }, 00:14:44.528 { 00:14:44.528 "name": "BaseBdev3", 00:14:44.528 "uuid": "c3af495d-2627-48a9-afd3-5e67aa898c47", 00:14:44.528 "is_configured": true, 00:14:44.528 "data_offset": 2048, 00:14:44.528 "data_size": 63488 00:14:44.528 } 00:14:44.528 ] 00:14:44.528 }' 00:14:44.528 03:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.528 03:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.094 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:45.351 [2024-05-15 03:09:16.353922] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:45.351 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:45.351 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:45.351 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:45.351 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:45.351 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:45.351 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:45.352 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:45.352 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:45.352 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:45.352 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:45.352 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.352 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.610 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:45.610 "name": "Existed_Raid", 00:14:45.610 "uuid": "ccb00ea2-b002-41b7-80fb-e8b45716cf7d", 00:14:45.610 "strip_size_kb": 0, 00:14:45.610 "state": "configuring", 00:14:45.610 "raid_level": "raid1", 00:14:45.610 "superblock": true, 00:14:45.610 "num_base_bdevs": 3, 00:14:45.610 "num_base_bdevs_discovered": 1, 00:14:45.610 "num_base_bdevs_operational": 3, 00:14:45.610 "base_bdevs_list": [ 00:14:45.610 { 00:14:45.610 "name": "BaseBdev1", 00:14:45.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.610 "is_configured": false, 00:14:45.610 "data_offset": 0, 00:14:45.610 "data_size": 0 00:14:45.610 }, 00:14:45.610 { 00:14:45.610 "name": null, 00:14:45.610 "uuid": "4619fb68-18f5-441f-b199-36705a7c6748", 00:14:45.610 "is_configured": false, 00:14:45.610 "data_offset": 2048, 00:14:45.610 "data_size": 63488 00:14:45.610 }, 00:14:45.610 { 00:14:45.610 
"name": "BaseBdev3", 00:14:45.610 "uuid": "c3af495d-2627-48a9-afd3-5e67aa898c47", 00:14:45.610 "is_configured": true, 00:14:45.610 "data_offset": 2048, 00:14:45.610 "data_size": 63488 00:14:45.610 } 00:14:45.610 ] 00:14:45.610 }' 00:14:45.610 03:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:45.610 03:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.174 03:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.174 03:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:46.432 03:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:14:46.432 03:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:46.691 [2024-05-15 03:09:17.756979] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.691 BaseBdev1 00:14:46.691 03:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:14:46.691 03:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:46.691 03:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:46.691 03:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:46.691 03:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:46.691 03:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:46.691 03:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.949 03:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:47.207 [ 00:14:47.207 { 00:14:47.207 "name": "BaseBdev1", 00:14:47.207 "aliases": [ 00:14:47.207 "71025048-95ce-4a2a-80a2-eb5130071b44" 00:14:47.207 ], 00:14:47.207 "product_name": "Malloc disk", 00:14:47.207 "block_size": 512, 00:14:47.207 "num_blocks": 65536, 00:14:47.207 "uuid": "71025048-95ce-4a2a-80a2-eb5130071b44", 00:14:47.207 "assigned_rate_limits": { 00:14:47.207 "rw_ios_per_sec": 0, 00:14:47.207 "rw_mbytes_per_sec": 0, 00:14:47.207 "r_mbytes_per_sec": 0, 00:14:47.207 "w_mbytes_per_sec": 0 00:14:47.207 }, 00:14:47.207 "claimed": true, 00:14:47.207 "claim_type": "exclusive_write", 00:14:47.207 "zoned": false, 00:14:47.207 "supported_io_types": { 00:14:47.207 "read": true, 00:14:47.207 "write": true, 00:14:47.207 "unmap": true, 00:14:47.207 "write_zeroes": true, 00:14:47.207 "flush": true, 00:14:47.207 "reset": true, 00:14:47.207 "compare": false, 00:14:47.207 "compare_and_write": false, 00:14:47.207 "abort": true, 00:14:47.207 "nvme_admin": false, 00:14:47.207 "nvme_io": false 00:14:47.207 }, 00:14:47.207 "memory_domains": [ 00:14:47.207 { 00:14:47.207 "dma_device_id": "system", 00:14:47.207 "dma_device_type": 1 00:14:47.207 }, 00:14:47.207 { 
00:14:47.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.207 "dma_device_type": 2 00:14:47.207 } 00:14:47.207 ], 00:14:47.207 "driver_specific": {} 00:14:47.207 } 00:14:47.207 ] 00:14:47.207 03:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:47.207 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:47.207 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:47.207 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:47.208 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:47.208 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:47.208 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:47.208 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:47.208 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:47.208 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:47.208 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:47.208 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.208 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.466 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.466 "name": "Existed_Raid", 00:14:47.466 "uuid": "ccb00ea2-b002-41b7-80fb-e8b45716cf7d", 00:14:47.466 "strip_size_kb": 0, 00:14:47.466 "state": "configuring", 00:14:47.466 "raid_level": "raid1", 00:14:47.466 "superblock": true, 00:14:47.466 "num_base_bdevs": 3, 00:14:47.466 "num_base_bdevs_discovered": 2, 00:14:47.466 "num_base_bdevs_operational": 3, 00:14:47.466 "base_bdevs_list": [ 00:14:47.466 { 00:14:47.466 "name": "BaseBdev1", 00:14:47.466 "uuid": "71025048-95ce-4a2a-80a2-eb5130071b44", 00:14:47.466 "is_configured": true, 00:14:47.466 "data_offset": 2048, 00:14:47.466 "data_size": 63488 00:14:47.466 }, 00:14:47.466 { 00:14:47.466 "name": null, 00:14:47.466 "uuid": "4619fb68-18f5-441f-b199-36705a7c6748", 00:14:47.466 "is_configured": false, 00:14:47.466 "data_offset": 2048, 00:14:47.466 "data_size": 63488 00:14:47.466 }, 00:14:47.466 { 00:14:47.466 "name": "BaseBdev3", 00:14:47.466 "uuid": "c3af495d-2627-48a9-afd3-5e67aa898c47", 00:14:47.466 "is_configured": true, 00:14:47.466 "data_offset": 2048, 00:14:47.466 "data_size": 63488 00:14:47.466 } 00:14:47.466 ] 00:14:47.466 }' 00:14:47.466 03:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.466 03:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.031 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.031 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:14:48.288 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:48.288 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:48.546 [2024-05-15 03:09:19.613989] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:48.546 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:48.546 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:48.546 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:48.546 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:48.546 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:48.546 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:48.546 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:48.546 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:48.546 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:48.546 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:48.546 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.546 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.804 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:48.804 "name": "Existed_Raid", 00:14:48.804 "uuid": "ccb00ea2-b002-41b7-80fb-e8b45716cf7d", 00:14:48.804 "strip_size_kb": 0, 00:14:48.804 "state": "configuring", 00:14:48.804 "raid_level": "raid1", 00:14:48.804 "superblock": true, 00:14:48.804 "num_base_bdevs": 3, 00:14:48.804 "num_base_bdevs_discovered": 1, 00:14:48.804 "num_base_bdevs_operational": 3, 00:14:48.804 "base_bdevs_list": [ 00:14:48.804 { 00:14:48.804 "name": "BaseBdev1", 00:14:48.804 "uuid": "71025048-95ce-4a2a-80a2-eb5130071b44", 00:14:48.804 "is_configured": true, 00:14:48.804 "data_offset": 2048, 00:14:48.804 "data_size": 63488 00:14:48.804 }, 00:14:48.804 { 00:14:48.804 "name": null, 00:14:48.804 "uuid": "4619fb68-18f5-441f-b199-36705a7c6748", 00:14:48.804 "is_configured": false, 00:14:48.804 "data_offset": 2048, 00:14:48.804 "data_size": 63488 00:14:48.804 }, 00:14:48.804 { 00:14:48.804 "name": null, 00:14:48.804 "uuid": "c3af495d-2627-48a9-afd3-5e67aa898c47", 00:14:48.804 "is_configured": false, 00:14:48.804 "data_offset": 2048, 00:14:48.804 "data_size": 63488 00:14:48.804 } 00:14:48.804 ] 00:14:48.804 }' 00:14:48.804 03:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:48.804 03:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.370 03:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.370 03:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:49.628 03:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:14:49.628 03:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:49.886 [2024-05-15 03:09:20.993709] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.886 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:49.886 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:49.886 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:49.886 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:49.886 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:49.886 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:49.886 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.886 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.886 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.886 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.886 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.886 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.144 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.144 "name": "Existed_Raid", 00:14:50.144 "uuid": "ccb00ea2-b002-41b7-80fb-e8b45716cf7d", 00:14:50.144 "strip_size_kb": 0, 00:14:50.144 "state": "configuring", 00:14:50.144 "raid_level": "raid1", 00:14:50.144 "superblock": true, 00:14:50.144 "num_base_bdevs": 3, 00:14:50.144 "num_base_bdevs_discovered": 2, 00:14:50.144 "num_base_bdevs_operational": 3, 00:14:50.144 "base_bdevs_list": [ 00:14:50.144 { 00:14:50.144 "name": "BaseBdev1", 00:14:50.144 "uuid": "71025048-95ce-4a2a-80a2-eb5130071b44", 00:14:50.144 "is_configured": true, 00:14:50.144 "data_offset": 2048, 00:14:50.144 "data_size": 63488 00:14:50.144 }, 00:14:50.144 { 00:14:50.144 "name": null, 00:14:50.144 "uuid": "4619fb68-18f5-441f-b199-36705a7c6748", 00:14:50.144 "is_configured": false, 00:14:50.144 "data_offset": 2048, 00:14:50.144 "data_size": 63488 00:14:50.144 }, 00:14:50.144 { 00:14:50.144 "name": "BaseBdev3", 00:14:50.144 "uuid": "c3af495d-2627-48a9-afd3-5e67aa898c47", 00:14:50.144 "is_configured": true, 00:14:50.144 "data_offset": 2048, 00:14:50.144 "data_size": 63488 00:14:50.144 } 00:14:50.144 ] 00:14:50.144 }' 00:14:50.144 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:14:50.144 03:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.078 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.078 03:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:51.078 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:14:51.078 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:51.336 [2024-05-15 03:09:22.377558] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.336 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:51.336 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:51.336 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:51.336 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:51.336 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:51.336 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:51.336 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:51.336 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:51.336 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:51.336 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:51.336 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.336 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.594 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:51.594 "name": "Existed_Raid", 00:14:51.594 "uuid": "ccb00ea2-b002-41b7-80fb-e8b45716cf7d", 00:14:51.594 "strip_size_kb": 0, 00:14:51.594 "state": "configuring", 00:14:51.594 "raid_level": "raid1", 00:14:51.594 "superblock": true, 00:14:51.594 "num_base_bdevs": 3, 00:14:51.594 "num_base_bdevs_discovered": 1, 00:14:51.594 "num_base_bdevs_operational": 3, 00:14:51.594 "base_bdevs_list": [ 00:14:51.594 { 00:14:51.594 "name": null, 00:14:51.594 "uuid": "71025048-95ce-4a2a-80a2-eb5130071b44", 00:14:51.594 "is_configured": false, 00:14:51.594 "data_offset": 2048, 00:14:51.594 "data_size": 63488 00:14:51.594 }, 00:14:51.594 { 00:14:51.594 "name": null, 00:14:51.594 "uuid": "4619fb68-18f5-441f-b199-36705a7c6748", 00:14:51.594 "is_configured": false, 00:14:51.594 "data_offset": 2048, 00:14:51.594 "data_size": 63488 00:14:51.594 }, 00:14:51.594 { 00:14:51.594 "name": "BaseBdev3", 00:14:51.594 "uuid": "c3af495d-2627-48a9-afd3-5e67aa898c47", 00:14:51.594 "is_configured": true, 00:14:51.595 "data_offset": 2048, 00:14:51.595 
"data_size": 63488 00:14:51.595 } 00:14:51.595 ] 00:14:51.595 }' 00:14:51.595 03:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:51.595 03:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.160 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.160 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:52.417 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:14:52.417 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:52.675 [2024-05-15 03:09:23.767747] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.675 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:52.675 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:52.675 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:52.675 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:52.675 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:52.675 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:52.675 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:52.675 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:52.675 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:52.675 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:52.675 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.675 03:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.932 03:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:52.932 "name": "Existed_Raid", 00:14:52.932 "uuid": "ccb00ea2-b002-41b7-80fb-e8b45716cf7d", 00:14:52.932 "strip_size_kb": 0, 00:14:52.932 "state": "configuring", 00:14:52.932 "raid_level": "raid1", 00:14:52.932 "superblock": true, 00:14:52.932 "num_base_bdevs": 3, 00:14:52.932 "num_base_bdevs_discovered": 2, 00:14:52.932 "num_base_bdevs_operational": 3, 00:14:52.932 "base_bdevs_list": [ 00:14:52.932 { 00:14:52.932 "name": null, 00:14:52.932 "uuid": "71025048-95ce-4a2a-80a2-eb5130071b44", 00:14:52.932 "is_configured": false, 00:14:52.932 "data_offset": 2048, 00:14:52.932 "data_size": 63488 00:14:52.932 }, 00:14:52.932 { 00:14:52.932 "name": "BaseBdev2", 00:14:52.932 "uuid": "4619fb68-18f5-441f-b199-36705a7c6748", 00:14:52.932 "is_configured": true, 00:14:52.932 "data_offset": 2048, 00:14:52.932 "data_size": 63488 
00:14:52.932 }, 00:14:52.932 { 00:14:52.932 "name": "BaseBdev3", 00:14:52.932 "uuid": "c3af495d-2627-48a9-afd3-5e67aa898c47", 00:14:52.933 "is_configured": true, 00:14:52.933 "data_offset": 2048, 00:14:52.933 "data_size": 63488 00:14:52.933 } 00:14:52.933 ] 00:14:52.933 }' 00:14:52.933 03:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:52.933 03:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.867 03:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.867 03:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:53.867 03:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:14:53.867 03:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.867 03:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:54.126 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 71025048-95ce-4a2a-80a2-eb5130071b44 00:14:54.384 [2024-05-15 03:09:25.351274] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:54.384 [2024-05-15 03:09:25.351418] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x9a5930 00:14:54.384 [2024-05-15 03:09:25.351430] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:54.384 [2024-05-15 03:09:25.351614] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb49830 00:14:54.384 [2024-05-15 03:09:25.351737] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9a5930 00:14:54.384 [2024-05-15 03:09:25.351745] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x9a5930 00:14:54.384 [2024-05-15 03:09:25.351836] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.384 NewBaseBdev 00:14:54.384 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:14:54.384 03:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:14:54.384 03:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:54.384 03:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:54.385 03:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:54.385 03:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:54.385 03:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:54.642 03:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 
2000 00:14:54.901 [ 00:14:54.901 { 00:14:54.901 "name": "NewBaseBdev", 00:14:54.901 "aliases": [ 00:14:54.901 "71025048-95ce-4a2a-80a2-eb5130071b44" 00:14:54.901 ], 00:14:54.901 "product_name": "Malloc disk", 00:14:54.901 "block_size": 512, 00:14:54.901 "num_blocks": 65536, 00:14:54.901 "uuid": "71025048-95ce-4a2a-80a2-eb5130071b44", 00:14:54.901 "assigned_rate_limits": { 00:14:54.901 "rw_ios_per_sec": 0, 00:14:54.901 "rw_mbytes_per_sec": 0, 00:14:54.901 "r_mbytes_per_sec": 0, 00:14:54.901 "w_mbytes_per_sec": 0 00:14:54.901 }, 00:14:54.901 "claimed": true, 00:14:54.901 "claim_type": "exclusive_write", 00:14:54.901 "zoned": false, 00:14:54.901 "supported_io_types": { 00:14:54.901 "read": true, 00:14:54.901 "write": true, 00:14:54.901 "unmap": true, 00:14:54.901 "write_zeroes": true, 00:14:54.901 "flush": true, 00:14:54.901 "reset": true, 00:14:54.901 "compare": false, 00:14:54.901 "compare_and_write": false, 00:14:54.901 "abort": true, 00:14:54.901 "nvme_admin": false, 00:14:54.901 "nvme_io": false 00:14:54.901 }, 00:14:54.901 "memory_domains": [ 00:14:54.901 { 00:14:54.901 "dma_device_id": "system", 00:14:54.901 "dma_device_type": 1 00:14:54.901 }, 00:14:54.901 { 00:14:54.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.901 "dma_device_type": 2 00:14:54.901 } 00:14:54.901 ], 00:14:54.901 "driver_specific": {} 00:14:54.901 } 00:14:54.901 ] 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.901 03:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.160 03:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:55.160 "name": "Existed_Raid", 00:14:55.160 "uuid": "ccb00ea2-b002-41b7-80fb-e8b45716cf7d", 00:14:55.160 "strip_size_kb": 0, 00:14:55.160 "state": "online", 00:14:55.160 "raid_level": "raid1", 00:14:55.160 "superblock": true, 00:14:55.160 "num_base_bdevs": 3, 00:14:55.160 "num_base_bdevs_discovered": 3, 00:14:55.160 "num_base_bdevs_operational": 3, 00:14:55.160 "base_bdevs_list": [ 00:14:55.160 { 00:14:55.160 "name": "NewBaseBdev", 00:14:55.160 
"uuid": "71025048-95ce-4a2a-80a2-eb5130071b44", 00:14:55.160 "is_configured": true, 00:14:55.160 "data_offset": 2048, 00:14:55.160 "data_size": 63488 00:14:55.160 }, 00:14:55.160 { 00:14:55.160 "name": "BaseBdev2", 00:14:55.160 "uuid": "4619fb68-18f5-441f-b199-36705a7c6748", 00:14:55.160 "is_configured": true, 00:14:55.160 "data_offset": 2048, 00:14:55.160 "data_size": 63488 00:14:55.160 }, 00:14:55.160 { 00:14:55.160 "name": "BaseBdev3", 00:14:55.160 "uuid": "c3af495d-2627-48a9-afd3-5e67aa898c47", 00:14:55.160 "is_configured": true, 00:14:55.160 "data_offset": 2048, 00:14:55.160 "data_size": 63488 00:14:55.160 } 00:14:55.160 ] 00:14:55.160 }' 00:14:55.160 03:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:55.160 03:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.725 03:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:14:55.725 03:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:55.725 03:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:55.725 03:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:55.725 03:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:55.725 03:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:14:55.725 03:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:55.725 03:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:55.985 [2024-05-15 03:09:26.995972] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.985 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:55.985 "name": "Existed_Raid", 00:14:55.985 "aliases": [ 00:14:55.985 "ccb00ea2-b002-41b7-80fb-e8b45716cf7d" 00:14:55.985 ], 00:14:55.985 "product_name": "Raid Volume", 00:14:55.985 "block_size": 512, 00:14:55.985 "num_blocks": 63488, 00:14:55.985 "uuid": "ccb00ea2-b002-41b7-80fb-e8b45716cf7d", 00:14:55.985 "assigned_rate_limits": { 00:14:55.985 "rw_ios_per_sec": 0, 00:14:55.985 "rw_mbytes_per_sec": 0, 00:14:55.985 "r_mbytes_per_sec": 0, 00:14:55.985 "w_mbytes_per_sec": 0 00:14:55.985 }, 00:14:55.985 "claimed": false, 00:14:55.985 "zoned": false, 00:14:55.985 "supported_io_types": { 00:14:55.985 "read": true, 00:14:55.985 "write": true, 00:14:55.985 "unmap": false, 00:14:55.985 "write_zeroes": true, 00:14:55.985 "flush": false, 00:14:55.985 "reset": true, 00:14:55.985 "compare": false, 00:14:55.985 "compare_and_write": false, 00:14:55.985 "abort": false, 00:14:55.985 "nvme_admin": false, 00:14:55.985 "nvme_io": false 00:14:55.985 }, 00:14:55.985 "memory_domains": [ 00:14:55.985 { 00:14:55.985 "dma_device_id": "system", 00:14:55.985 "dma_device_type": 1 00:14:55.985 }, 00:14:55.985 { 00:14:55.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.985 "dma_device_type": 2 00:14:55.985 }, 00:14:55.985 { 00:14:55.985 "dma_device_id": "system", 00:14:55.985 "dma_device_type": 1 00:14:55.985 }, 00:14:55.985 { 00:14:55.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.985 "dma_device_type": 2 00:14:55.985 }, 00:14:55.985 { 00:14:55.985 
"dma_device_id": "system", 00:14:55.985 "dma_device_type": 1 00:14:55.985 }, 00:14:55.985 { 00:14:55.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.985 "dma_device_type": 2 00:14:55.985 } 00:14:55.985 ], 00:14:55.985 "driver_specific": { 00:14:55.985 "raid": { 00:14:55.985 "uuid": "ccb00ea2-b002-41b7-80fb-e8b45716cf7d", 00:14:55.985 "strip_size_kb": 0, 00:14:55.985 "state": "online", 00:14:55.985 "raid_level": "raid1", 00:14:55.985 "superblock": true, 00:14:55.985 "num_base_bdevs": 3, 00:14:55.985 "num_base_bdevs_discovered": 3, 00:14:55.986 "num_base_bdevs_operational": 3, 00:14:55.986 "base_bdevs_list": [ 00:14:55.986 { 00:14:55.986 "name": "NewBaseBdev", 00:14:55.986 "uuid": "71025048-95ce-4a2a-80a2-eb5130071b44", 00:14:55.986 "is_configured": true, 00:14:55.986 "data_offset": 2048, 00:14:55.986 "data_size": 63488 00:14:55.986 }, 00:14:55.986 { 00:14:55.986 "name": "BaseBdev2", 00:14:55.986 "uuid": "4619fb68-18f5-441f-b199-36705a7c6748", 00:14:55.986 "is_configured": true, 00:14:55.986 "data_offset": 2048, 00:14:55.986 "data_size": 63488 00:14:55.986 }, 00:14:55.986 { 00:14:55.986 "name": "BaseBdev3", 00:14:55.986 "uuid": "c3af495d-2627-48a9-afd3-5e67aa898c47", 00:14:55.986 "is_configured": true, 00:14:55.986 "data_offset": 2048, 00:14:55.986 "data_size": 63488 00:14:55.986 } 00:14:55.986 ] 00:14:55.986 } 00:14:55.986 } 00:14:55.986 }' 00:14:55.986 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.986 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:14:55.986 BaseBdev2 00:14:55.986 BaseBdev3' 00:14:55.986 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:55.986 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:55.986 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:56.276 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:56.276 "name": "NewBaseBdev", 00:14:56.276 "aliases": [ 00:14:56.276 "71025048-95ce-4a2a-80a2-eb5130071b44" 00:14:56.276 ], 00:14:56.276 "product_name": "Malloc disk", 00:14:56.276 "block_size": 512, 00:14:56.276 "num_blocks": 65536, 00:14:56.276 "uuid": "71025048-95ce-4a2a-80a2-eb5130071b44", 00:14:56.276 "assigned_rate_limits": { 00:14:56.276 "rw_ios_per_sec": 0, 00:14:56.276 "rw_mbytes_per_sec": 0, 00:14:56.276 "r_mbytes_per_sec": 0, 00:14:56.276 "w_mbytes_per_sec": 0 00:14:56.276 }, 00:14:56.276 "claimed": true, 00:14:56.276 "claim_type": "exclusive_write", 00:14:56.276 "zoned": false, 00:14:56.276 "supported_io_types": { 00:14:56.276 "read": true, 00:14:56.276 "write": true, 00:14:56.276 "unmap": true, 00:14:56.276 "write_zeroes": true, 00:14:56.276 "flush": true, 00:14:56.276 "reset": true, 00:14:56.276 "compare": false, 00:14:56.276 "compare_and_write": false, 00:14:56.276 "abort": true, 00:14:56.276 "nvme_admin": false, 00:14:56.276 "nvme_io": false 00:14:56.276 }, 00:14:56.276 "memory_domains": [ 00:14:56.276 { 00:14:56.276 "dma_device_id": "system", 00:14:56.276 "dma_device_type": 1 00:14:56.276 }, 00:14:56.276 { 00:14:56.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.276 "dma_device_type": 2 00:14:56.276 } 00:14:56.276 ], 00:14:56.276 "driver_specific": {} 
00:14:56.276 }' 00:14:56.276 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:56.276 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:56.276 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:56.276 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:56.535 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:56.535 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:56.535 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:56.535 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:56.535 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:56.535 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:56.535 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:56.535 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:56.535 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:56.793 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:56.793 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:56.793 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:56.793 "name": "BaseBdev2", 00:14:56.793 "aliases": [ 00:14:56.793 "4619fb68-18f5-441f-b199-36705a7c6748" 00:14:56.793 ], 00:14:56.793 "product_name": "Malloc disk", 00:14:56.793 "block_size": 512, 00:14:56.793 "num_blocks": 65536, 00:14:56.793 "uuid": "4619fb68-18f5-441f-b199-36705a7c6748", 00:14:56.793 "assigned_rate_limits": { 00:14:56.793 "rw_ios_per_sec": 0, 00:14:56.793 "rw_mbytes_per_sec": 0, 00:14:56.793 "r_mbytes_per_sec": 0, 00:14:56.793 "w_mbytes_per_sec": 0 00:14:56.793 }, 00:14:56.793 "claimed": true, 00:14:56.793 "claim_type": "exclusive_write", 00:14:56.793 "zoned": false, 00:14:56.793 "supported_io_types": { 00:14:56.793 "read": true, 00:14:56.793 "write": true, 00:14:56.793 "unmap": true, 00:14:56.793 "write_zeroes": true, 00:14:56.793 "flush": true, 00:14:56.793 "reset": true, 00:14:56.793 "compare": false, 00:14:56.793 "compare_and_write": false, 00:14:56.793 "abort": true, 00:14:56.793 "nvme_admin": false, 00:14:56.793 "nvme_io": false 00:14:56.793 }, 00:14:56.793 "memory_domains": [ 00:14:56.793 { 00:14:56.793 "dma_device_id": "system", 00:14:56.793 "dma_device_type": 1 00:14:56.793 }, 00:14:56.793 { 00:14:56.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.793 "dma_device_type": 2 00:14:56.793 } 00:14:56.793 ], 00:14:56.793 "driver_specific": {} 00:14:56.793 }' 00:14:56.793 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:56.793 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:57.051 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:57.051 03:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 
00:14:57.051 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:57.051 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:57.051 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:57.051 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:57.051 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:57.051 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:57.051 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:57.309 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:57.309 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:57.309 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:57.310 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:57.568 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:57.568 "name": "BaseBdev3", 00:14:57.568 "aliases": [ 00:14:57.568 "c3af495d-2627-48a9-afd3-5e67aa898c47" 00:14:57.568 ], 00:14:57.568 "product_name": "Malloc disk", 00:14:57.568 "block_size": 512, 00:14:57.568 "num_blocks": 65536, 00:14:57.568 "uuid": "c3af495d-2627-48a9-afd3-5e67aa898c47", 00:14:57.568 "assigned_rate_limits": { 00:14:57.568 "rw_ios_per_sec": 0, 00:14:57.568 "rw_mbytes_per_sec": 0, 00:14:57.568 "r_mbytes_per_sec": 0, 00:14:57.568 "w_mbytes_per_sec": 0 00:14:57.568 }, 00:14:57.568 "claimed": true, 00:14:57.568 "claim_type": "exclusive_write", 00:14:57.568 "zoned": false, 00:14:57.568 "supported_io_types": { 00:14:57.568 "read": true, 00:14:57.568 "write": true, 00:14:57.568 "unmap": true, 00:14:57.568 "write_zeroes": true, 00:14:57.568 "flush": true, 00:14:57.568 "reset": true, 00:14:57.568 "compare": false, 00:14:57.568 "compare_and_write": false, 00:14:57.568 "abort": true, 00:14:57.568 "nvme_admin": false, 00:14:57.568 "nvme_io": false 00:14:57.568 }, 00:14:57.568 "memory_domains": [ 00:14:57.568 { 00:14:57.568 "dma_device_id": "system", 00:14:57.568 "dma_device_type": 1 00:14:57.568 }, 00:14:57.568 { 00:14:57.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.568 "dma_device_type": 2 00:14:57.568 } 00:14:57.568 ], 00:14:57.568 "driver_specific": {} 00:14:57.568 }' 00:14:57.568 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:57.568 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:57.568 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:57.568 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:57.568 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:57.568 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:57.568 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:57.568 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 
00:14:57.826 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:57.826 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:57.826 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:57.826 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:57.826 03:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:58.085 [2024-05-15 03:09:29.085306] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.085 [2024-05-15 03:09:29.085330] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.085 [2024-05-15 03:09:29.085382] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.085 [2024-05-15 03:09:29.085664] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.085 [2024-05-15 03:09:29.085674] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a5930 name Existed_Raid, state offline 00:14:58.085 03:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 4092601 00:14:58.085 03:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 4092601 ']' 00:14:58.085 03:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 4092601 00:14:58.085 03:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:14:58.085 03:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:58.085 03:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4092601 00:14:58.085 03:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:58.085 03:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:58.085 03:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4092601' 00:14:58.085 killing process with pid 4092601 00:14:58.085 03:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 4092601 00:14:58.085 [2024-05-15 03:09:29.149272] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.085 03:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 4092601 00:14:58.085 [2024-05-15 03:09:29.174526] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.344 03:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:14:58.344 00:14:58.344 real 0m29.188s 00:14:58.344 user 0m54.774s 00:14:58.344 sys 0m4.063s 00:14:58.344 03:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:58.344 03:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.344 ************************************ 00:14:58.344 END TEST raid_state_function_test_sb 00:14:58.344 ************************************ 00:14:58.344 03:09:29 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
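
The test that starts here, raid_superblock_test, builds each RAID member as a passthru bdev stacked on a malloc bdev, then assembles a raid1 volume with an on-disk superblock (-s). A minimal sketch of that construction, assuming the rpc.py path and RPC socket shown in the surrounding traces (every RPC named here appears verbatim in this log):

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        # 32 MB malloc bdev with 512-byte blocks -> num_blocks 65536, as in the bdev dumps above
        $RPC bdev_malloc_create 32 512 -b malloc$i
        # fixed-UUID passthru bdev on top; these are the bdevs the raid actually claims
        $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    # raid1 across the passthru bdevs; -s writes a superblock to each member
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # online
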
00:14:58.344 03:09:29 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:58.344 03:09:29 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:58.344 03:09:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.344 ************************************ 00:14:58.344 START TEST raid_superblock_test 00:14:58.344 ************************************ 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 3 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=4098456 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 4098456 /var/tmp/spdk-raid.sock 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 4098456 ']' 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:58.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:58.344 03:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.602 [2024-05-15 03:09:29.530093] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
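
waitforlisten above simply blocks until the freshly started bdev_svc app answers on its UNIX-domain RPC socket. A rough equivalent outside the harness, with a plain poll loop standing in for the waitforlisten helper (rpc_get_methods is a standard SPDK RPC; the 0.1 s interval is an arbitrary choice):

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    $SPDK/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # poll until the app services RPCs on the socket
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
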
00:14:58.602 [2024-05-15 03:09:29.530147] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098456 ] 00:14:58.602 [2024-05-15 03:09:29.626181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.602 [2024-05-15 03:09:29.719835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.860 [2024-05-15 03:09:29.785534] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.860 [2024-05-15 03:09:29.785565] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.427 03:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:59.427 03:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:14:59.427 03:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:59.427 03:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.427 03:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:59.427 03:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:59.427 03:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:59.427 03:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.427 03:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.427 03:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.427 03:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:59.686 malloc1 00:14:59.686 03:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:59.944 [2024-05-15 03:09:30.979765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:59.944 [2024-05-15 03:09:30.979812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.944 [2024-05-15 03:09:30.979832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2742a00 00:14:59.944 [2024-05-15 03:09:30.979842] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.944 [2024-05-15 03:09:30.981559] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.944 [2024-05-15 03:09:30.981586] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:59.944 pt1 00:14:59.944 03:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:59.944 03:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.944 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:59.944 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:59.944 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:59.945 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.945 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.945 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.945 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:00.202 malloc2 00:15:00.202 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:00.460 [2024-05-15 03:09:31.485672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:00.460 [2024-05-15 03:09:31.485713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.461 [2024-05-15 03:09:31.485732] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27435f0 00:15:00.461 [2024-05-15 03:09:31.485742] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.461 [2024-05-15 03:09:31.487306] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.461 [2024-05-15 03:09:31.487333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:00.461 pt2 00:15:00.461 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:00.461 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:00.461 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:00.461 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:00.461 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:00.461 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:00.461 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:00.461 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:00.461 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:00.719 malloc3 00:15:00.719 03:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:00.976 [2024-05-15 03:09:31.987632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:00.976 [2024-05-15 03:09:31.987674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.976 [2024-05-15 03:09:31.987690] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e8900 00:15:00.977 [2024-05-15 03:09:31.987700] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.977 [2024-05-15 03:09:31.989226] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.977 [2024-05-15 03:09:31.989253] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:00.977 pt3 00:15:00.977 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:00.977 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:00.977 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:01.234 [2024-05-15 03:09:32.228276] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:01.234 [2024-05-15 03:09:32.229567] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.234 [2024-05-15 03:09:32.229623] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:01.234 [2024-05-15 03:09:32.229781] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x28eb0f0 00:15:01.234 [2024-05-15 03:09:32.229792] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:01.234 [2024-05-15 03:09:32.229986] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2743f30 00:15:01.234 [2024-05-15 03:09:32.230142] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28eb0f0 00:15:01.234 [2024-05-15 03:09:32.230151] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x28eb0f0 00:15:01.234 [2024-05-15 03:09:32.230250] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.234 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:01.234 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:01.234 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:01.234 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:01.234 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:01.234 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:01.234 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.234 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.234 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.234 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.234 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.234 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.492 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.492 "name": "raid_bdev1", 00:15:01.492 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:01.492 "strip_size_kb": 0, 00:15:01.492 "state": "online", 00:15:01.492 "raid_level": "raid1", 00:15:01.492 "superblock": true, 00:15:01.492 "num_base_bdevs": 3, 00:15:01.492 
"num_base_bdevs_discovered": 3, 00:15:01.492 "num_base_bdevs_operational": 3, 00:15:01.492 "base_bdevs_list": [ 00:15:01.492 { 00:15:01.492 "name": "pt1", 00:15:01.492 "uuid": "c24cccd4-b258-5a39-a3d3-d5d45a5fed44", 00:15:01.492 "is_configured": true, 00:15:01.492 "data_offset": 2048, 00:15:01.492 "data_size": 63488 00:15:01.492 }, 00:15:01.492 { 00:15:01.492 "name": "pt2", 00:15:01.492 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:01.492 "is_configured": true, 00:15:01.492 "data_offset": 2048, 00:15:01.492 "data_size": 63488 00:15:01.492 }, 00:15:01.492 { 00:15:01.492 "name": "pt3", 00:15:01.492 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:01.492 "is_configured": true, 00:15:01.492 "data_offset": 2048, 00:15:01.492 "data_size": 63488 00:15:01.492 } 00:15:01.492 ] 00:15:01.492 }' 00:15:01.492 03:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.492 03:09:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.057 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:02.057 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:15:02.057 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:02.057 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:02.057 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:02.057 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:02.057 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:02.057 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:02.315 [2024-05-15 03:09:33.339494] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.315 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:02.315 "name": "raid_bdev1", 00:15:02.315 "aliases": [ 00:15:02.315 "760bbd9e-6944-4fe8-8605-53d9b0100787" 00:15:02.315 ], 00:15:02.315 "product_name": "Raid Volume", 00:15:02.315 "block_size": 512, 00:15:02.315 "num_blocks": 63488, 00:15:02.315 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:02.315 "assigned_rate_limits": { 00:15:02.315 "rw_ios_per_sec": 0, 00:15:02.315 "rw_mbytes_per_sec": 0, 00:15:02.315 "r_mbytes_per_sec": 0, 00:15:02.315 "w_mbytes_per_sec": 0 00:15:02.315 }, 00:15:02.315 "claimed": false, 00:15:02.315 "zoned": false, 00:15:02.315 "supported_io_types": { 00:15:02.315 "read": true, 00:15:02.315 "write": true, 00:15:02.315 "unmap": false, 00:15:02.315 "write_zeroes": true, 00:15:02.315 "flush": false, 00:15:02.315 "reset": true, 00:15:02.315 "compare": false, 00:15:02.315 "compare_and_write": false, 00:15:02.315 "abort": false, 00:15:02.315 "nvme_admin": false, 00:15:02.315 "nvme_io": false 00:15:02.315 }, 00:15:02.315 "memory_domains": [ 00:15:02.315 { 00:15:02.315 "dma_device_id": "system", 00:15:02.315 "dma_device_type": 1 00:15:02.315 }, 00:15:02.315 { 00:15:02.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.315 "dma_device_type": 2 00:15:02.315 }, 00:15:02.315 { 00:15:02.315 "dma_device_id": "system", 00:15:02.315 "dma_device_type": 1 00:15:02.315 }, 00:15:02.315 { 00:15:02.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:02.315 "dma_device_type": 2 00:15:02.315 }, 00:15:02.315 { 00:15:02.315 "dma_device_id": "system", 00:15:02.315 "dma_device_type": 1 00:15:02.315 }, 00:15:02.315 { 00:15:02.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.315 "dma_device_type": 2 00:15:02.315 } 00:15:02.315 ], 00:15:02.315 "driver_specific": { 00:15:02.315 "raid": { 00:15:02.315 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:02.315 "strip_size_kb": 0, 00:15:02.315 "state": "online", 00:15:02.315 "raid_level": "raid1", 00:15:02.315 "superblock": true, 00:15:02.315 "num_base_bdevs": 3, 00:15:02.315 "num_base_bdevs_discovered": 3, 00:15:02.315 "num_base_bdevs_operational": 3, 00:15:02.315 "base_bdevs_list": [ 00:15:02.315 { 00:15:02.315 "name": "pt1", 00:15:02.315 "uuid": "c24cccd4-b258-5a39-a3d3-d5d45a5fed44", 00:15:02.315 "is_configured": true, 00:15:02.315 "data_offset": 2048, 00:15:02.315 "data_size": 63488 00:15:02.315 }, 00:15:02.315 { 00:15:02.315 "name": "pt2", 00:15:02.315 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:02.315 "is_configured": true, 00:15:02.315 "data_offset": 2048, 00:15:02.315 "data_size": 63488 00:15:02.315 }, 00:15:02.315 { 00:15:02.315 "name": "pt3", 00:15:02.315 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:02.315 "is_configured": true, 00:15:02.315 "data_offset": 2048, 00:15:02.315 "data_size": 63488 00:15:02.315 } 00:15:02.315 ] 00:15:02.315 } 00:15:02.315 } 00:15:02.315 }' 00:15:02.315 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.315 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:15:02.315 pt2 00:15:02.315 pt3' 00:15:02.315 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:02.315 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:02.315 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:02.572 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:02.572 "name": "pt1", 00:15:02.572 "aliases": [ 00:15:02.572 "c24cccd4-b258-5a39-a3d3-d5d45a5fed44" 00:15:02.572 ], 00:15:02.572 "product_name": "passthru", 00:15:02.572 "block_size": 512, 00:15:02.572 "num_blocks": 65536, 00:15:02.572 "uuid": "c24cccd4-b258-5a39-a3d3-d5d45a5fed44", 00:15:02.572 "assigned_rate_limits": { 00:15:02.572 "rw_ios_per_sec": 0, 00:15:02.572 "rw_mbytes_per_sec": 0, 00:15:02.572 "r_mbytes_per_sec": 0, 00:15:02.572 "w_mbytes_per_sec": 0 00:15:02.572 }, 00:15:02.572 "claimed": true, 00:15:02.572 "claim_type": "exclusive_write", 00:15:02.572 "zoned": false, 00:15:02.572 "supported_io_types": { 00:15:02.572 "read": true, 00:15:02.572 "write": true, 00:15:02.572 "unmap": true, 00:15:02.572 "write_zeroes": true, 00:15:02.572 "flush": true, 00:15:02.572 "reset": true, 00:15:02.572 "compare": false, 00:15:02.572 "compare_and_write": false, 00:15:02.572 "abort": true, 00:15:02.572 "nvme_admin": false, 00:15:02.572 "nvme_io": false 00:15:02.572 }, 00:15:02.572 "memory_domains": [ 00:15:02.572 { 00:15:02.572 "dma_device_id": "system", 00:15:02.572 "dma_device_type": 1 00:15:02.572 }, 00:15:02.572 { 00:15:02.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.572 "dma_device_type": 2 00:15:02.572 } 00:15:02.572 ], 00:15:02.572 "driver_specific": { 00:15:02.572 "passthru": { 
00:15:02.572 "name": "pt1", 00:15:02.572 "base_bdev_name": "malloc1" 00:15:02.572 } 00:15:02.572 } 00:15:02.572 }' 00:15:02.572 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:02.572 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:02.829 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:02.830 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:02.830 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:02.830 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:02.830 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:02.830 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:02.830 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:02.830 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:02.830 03:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:03.087 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:03.087 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:03.087 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:03.087 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:03.344 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:03.344 "name": "pt2", 00:15:03.344 "aliases": [ 00:15:03.344 "ade1da64-e325-5dbe-a20a-b972358a638f" 00:15:03.344 ], 00:15:03.344 "product_name": "passthru", 00:15:03.344 "block_size": 512, 00:15:03.344 "num_blocks": 65536, 00:15:03.344 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:03.344 "assigned_rate_limits": { 00:15:03.344 "rw_ios_per_sec": 0, 00:15:03.344 "rw_mbytes_per_sec": 0, 00:15:03.344 "r_mbytes_per_sec": 0, 00:15:03.344 "w_mbytes_per_sec": 0 00:15:03.344 }, 00:15:03.344 "claimed": true, 00:15:03.344 "claim_type": "exclusive_write", 00:15:03.344 "zoned": false, 00:15:03.344 "supported_io_types": { 00:15:03.344 "read": true, 00:15:03.344 "write": true, 00:15:03.344 "unmap": true, 00:15:03.344 "write_zeroes": true, 00:15:03.344 "flush": true, 00:15:03.344 "reset": true, 00:15:03.344 "compare": false, 00:15:03.344 "compare_and_write": false, 00:15:03.344 "abort": true, 00:15:03.344 "nvme_admin": false, 00:15:03.344 "nvme_io": false 00:15:03.344 }, 00:15:03.344 "memory_domains": [ 00:15:03.345 { 00:15:03.345 "dma_device_id": "system", 00:15:03.345 "dma_device_type": 1 00:15:03.345 }, 00:15:03.345 { 00:15:03.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.345 "dma_device_type": 2 00:15:03.345 } 00:15:03.345 ], 00:15:03.345 "driver_specific": { 00:15:03.345 "passthru": { 00:15:03.345 "name": "pt2", 00:15:03.345 "base_bdev_name": "malloc2" 00:15:03.345 } 00:15:03.345 } 00:15:03.345 }' 00:15:03.345 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:03.345 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:03.345 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:03.345 03:09:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:03.345 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:03.345 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.345 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:03.602 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:03.602 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.602 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:03.602 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:03.602 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:03.602 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:03.602 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:03.602 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:03.861 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:03.861 "name": "pt3", 00:15:03.861 "aliases": [ 00:15:03.861 "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2" 00:15:03.861 ], 00:15:03.861 "product_name": "passthru", 00:15:03.861 "block_size": 512, 00:15:03.861 "num_blocks": 65536, 00:15:03.861 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:03.861 "assigned_rate_limits": { 00:15:03.861 "rw_ios_per_sec": 0, 00:15:03.861 "rw_mbytes_per_sec": 0, 00:15:03.861 "r_mbytes_per_sec": 0, 00:15:03.861 "w_mbytes_per_sec": 0 00:15:03.861 }, 00:15:03.861 "claimed": true, 00:15:03.861 "claim_type": "exclusive_write", 00:15:03.861 "zoned": false, 00:15:03.861 "supported_io_types": { 00:15:03.861 "read": true, 00:15:03.861 "write": true, 00:15:03.861 "unmap": true, 00:15:03.861 "write_zeroes": true, 00:15:03.861 "flush": true, 00:15:03.861 "reset": true, 00:15:03.861 "compare": false, 00:15:03.861 "compare_and_write": false, 00:15:03.861 "abort": true, 00:15:03.861 "nvme_admin": false, 00:15:03.861 "nvme_io": false 00:15:03.861 }, 00:15:03.861 "memory_domains": [ 00:15:03.861 { 00:15:03.861 "dma_device_id": "system", 00:15:03.861 "dma_device_type": 1 00:15:03.861 }, 00:15:03.861 { 00:15:03.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.861 "dma_device_type": 2 00:15:03.861 } 00:15:03.861 ], 00:15:03.861 "driver_specific": { 00:15:03.861 "passthru": { 00:15:03.861 "name": "pt3", 00:15:03.861 "base_bdev_name": "malloc3" 00:15:03.861 } 00:15:03.861 } 00:15:03.861 }' 00:15:03.861 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:03.861 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:03.861 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:03.861 03:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:04.119 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:04.119 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:04.119 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:04.119 03:09:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:04.119 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:04.119 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:04.119 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:04.119 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:04.119 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:04.119 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:04.377 [2024-05-15 03:09:35.449132] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.377 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=760bbd9e-6944-4fe8-8605-53d9b0100787 00:15:04.377 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 760bbd9e-6944-4fe8-8605-53d9b0100787 ']' 00:15:04.377 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:04.636 [2024-05-15 03:09:35.709576] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.636 [2024-05-15 03:09:35.709600] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.636 [2024-05-15 03:09:35.709649] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.636 [2024-05-15 03:09:35.709714] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.636 [2024-05-15 03:09:35.709723] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28eb0f0 name raid_bdev1, state offline 00:15:04.636 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.636 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:04.893 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:04.893 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:04.893 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:04.893 03:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:05.151 03:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:05.151 03:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:05.409 03:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:05.409 03:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:05.668 03:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:05.668 03:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:05.927 03:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:06.186 [2024-05-15 03:09:37.225548] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:06.186 [2024-05-15 03:09:37.226980] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:06.186 [2024-05-15 03:09:37.227024] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:06.186 [2024-05-15 03:09:37.227068] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:06.186 [2024-05-15 03:09:37.227104] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:06.186 [2024-05-15 03:09:37.227124] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:06.186 [2024-05-15 03:09:37.227139] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.186 [2024-05-15 03:09:37.227146] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28eb450 name raid_bdev1, state configuring 00:15:06.186 request: 
00:15:06.186 { 00:15:06.186 "name": "raid_bdev1", 00:15:06.186 "raid_level": "raid1", 00:15:06.186 "base_bdevs": [ 00:15:06.186 "malloc1", 00:15:06.186 "malloc2", 00:15:06.186 "malloc3" 00:15:06.186 ], 00:15:06.186 "superblock": false, 00:15:06.186 "method": "bdev_raid_create", 00:15:06.186 "req_id": 1 00:15:06.186 } 00:15:06.186 Got JSON-RPC error response 00:15:06.186 response: 00:15:06.186 { 00:15:06.186 "code": -17, 00:15:06.186 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:06.186 } 00:15:06.186 03:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:06.186 03:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:06.186 03:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:06.186 03:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:06.186 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:06.186 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.444 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:06.444 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:06.444 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:06.702 [2024-05-15 03:09:37.730841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.702 [2024-05-15 03:09:37.730885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.702 [2024-05-15 03:09:37.730902] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e6320 00:15:06.702 [2024-05-15 03:09:37.730912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.702 [2024-05-15 03:09:37.732557] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.702 [2024-05-15 03:09:37.732582] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.702 [2024-05-15 03:09:37.732641] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:06.702 [2024-05-15 03:09:37.732666] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:06.702 pt1 00:15:06.702 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:06.702 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:06.702 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:06.702 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:06.702 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:06.702 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:06.702 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.702 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
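
The NOT/es checks above assert the superblock's protective behavior: the three malloc bdevs still carry raid_bdev1's superblock, so building a new raid directly on them is refused with JSON-RPC error -17 (File exists), and re-creating pt1 lets the examine path find the superblock and re-claim the bdev into raid_bdev1, which then sits in "configuring" until the remaining members return. The same two steps by hand (RPC as defined in the earlier sketch; the echo is only an illustration):

    # a fresh create on members that already carry a raid superblock is refused
    $RPC bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 \
        || echo 'refused: File exists (-17), as asserted above'
    # re-creating the passthru bdev re-claims it via the superblock found on malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
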
00:15:06.702 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.702 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.702 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.702 03:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.960 03:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.960 "name": "raid_bdev1", 00:15:06.960 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:06.960 "strip_size_kb": 0, 00:15:06.960 "state": "configuring", 00:15:06.960 "raid_level": "raid1", 00:15:06.960 "superblock": true, 00:15:06.960 "num_base_bdevs": 3, 00:15:06.960 "num_base_bdevs_discovered": 1, 00:15:06.960 "num_base_bdevs_operational": 3, 00:15:06.960 "base_bdevs_list": [ 00:15:06.960 { 00:15:06.960 "name": "pt1", 00:15:06.960 "uuid": "c24cccd4-b258-5a39-a3d3-d5d45a5fed44", 00:15:06.960 "is_configured": true, 00:15:06.960 "data_offset": 2048, 00:15:06.960 "data_size": 63488 00:15:06.960 }, 00:15:06.960 { 00:15:06.960 "name": null, 00:15:06.960 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:06.960 "is_configured": false, 00:15:06.960 "data_offset": 2048, 00:15:06.960 "data_size": 63488 00:15:06.960 }, 00:15:06.960 { 00:15:06.960 "name": null, 00:15:06.960 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:06.960 "is_configured": false, 00:15:06.960 "data_offset": 2048, 00:15:06.960 "data_size": 63488 00:15:06.960 } 00:15:06.960 ] 00:15:06.960 }' 00:15:06.960 03:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.960 03:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.526 03:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:07.526 03:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.783 [2024-05-15 03:09:38.865888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.783 [2024-05-15 03:09:38.865938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.783 [2024-05-15 03:09:38.865955] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28f3f80 00:15:07.783 [2024-05-15 03:09:38.865965] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.783 [2024-05-15 03:09:38.866319] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.784 [2024-05-15 03:09:38.866334] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.784 [2024-05-15 03:09:38.866395] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:07.784 [2024-05-15 03:09:38.866412] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.784 pt2 00:15:07.784 03:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:08.041 [2024-05-15 03:09:39.118573] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 
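
The @472/@473 steps just traced add pt2 back (it is claimed straight from the superblock found on malloc2) and then delete it again; deleting a claimed passthru bdev triggers base-bdev removal, and the raid drops back to a single discovered member while num_base_bdevs_operational stays 3, as the state dump that follows shows. Condensed (RPC as in the earlier sketch):

    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $RPC bdev_passthru_delete pt2
    $RPC bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # 1
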
00:15:08.041 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:08.041 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:08.041 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:08.041 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:08.041 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:08.041 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:08.041 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:08.041 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:08.041 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:08.041 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:08.042 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.042 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.299 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.299 "name": "raid_bdev1", 00:15:08.299 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:08.299 "strip_size_kb": 0, 00:15:08.299 "state": "configuring", 00:15:08.299 "raid_level": "raid1", 00:15:08.299 "superblock": true, 00:15:08.299 "num_base_bdevs": 3, 00:15:08.299 "num_base_bdevs_discovered": 1, 00:15:08.299 "num_base_bdevs_operational": 3, 00:15:08.299 "base_bdevs_list": [ 00:15:08.299 { 00:15:08.299 "name": "pt1", 00:15:08.299 "uuid": "c24cccd4-b258-5a39-a3d3-d5d45a5fed44", 00:15:08.299 "is_configured": true, 00:15:08.299 "data_offset": 2048, 00:15:08.299 "data_size": 63488 00:15:08.299 }, 00:15:08.299 { 00:15:08.299 "name": null, 00:15:08.299 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:08.299 "is_configured": false, 00:15:08.299 "data_offset": 2048, 00:15:08.299 "data_size": 63488 00:15:08.299 }, 00:15:08.299 { 00:15:08.299 "name": null, 00:15:08.299 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:08.299 "is_configured": false, 00:15:08.299 "data_offset": 2048, 00:15:08.299 "data_size": 63488 00:15:08.299 } 00:15:08.299 ] 00:15:08.299 }' 00:15:08.299 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.299 03:09:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.864 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:08.864 03:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:08.864 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.122 [2024-05-15 03:09:40.233562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.122 [2024-05-15 03:09:40.233614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.122 [2024-05-15 03:09:40.233634] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2739780 00:15:09.122 [2024-05-15 03:09:40.233643] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.122 [2024-05-15 03:09:40.233995] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.122 [2024-05-15 03:09:40.234010] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.122 [2024-05-15 03:09:40.234072] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:09.122 [2024-05-15 03:09:40.234089] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.122 pt2 00:15:09.122 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:09.122 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:09.122 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:09.380 [2024-05-15 03:09:40.498256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:09.380 [2024-05-15 03:09:40.498284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.380 [2024-05-15 03:09:40.498300] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x273a7f0 00:15:09.380 [2024-05-15 03:09:40.498309] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.380 [2024-05-15 03:09:40.498618] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.380 [2024-05-15 03:09:40.498633] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:09.380 [2024-05-15 03:09:40.498683] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:09.380 [2024-05-15 03:09:40.498700] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:09.380 [2024-05-15 03:09:40.498809] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x28e7150 00:15:09.380 [2024-05-15 03:09:40.498823] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:09.380 [2024-05-15 03:09:40.499008] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28f4330 00:15:09.380 [2024-05-15 03:09:40.499154] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28e7150 00:15:09.380 [2024-05-15 03:09:40.499163] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x28e7150 00:15:09.380 [2024-05-15 03:09:40.499267] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.380 pt3 00:15:09.380 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # 
local raid_level=raid1 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.381 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.639 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.639 "name": "raid_bdev1", 00:15:09.639 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:09.639 "strip_size_kb": 0, 00:15:09.639 "state": "online", 00:15:09.639 "raid_level": "raid1", 00:15:09.639 "superblock": true, 00:15:09.639 "num_base_bdevs": 3, 00:15:09.639 "num_base_bdevs_discovered": 3, 00:15:09.639 "num_base_bdevs_operational": 3, 00:15:09.639 "base_bdevs_list": [ 00:15:09.639 { 00:15:09.639 "name": "pt1", 00:15:09.639 "uuid": "c24cccd4-b258-5a39-a3d3-d5d45a5fed44", 00:15:09.639 "is_configured": true, 00:15:09.639 "data_offset": 2048, 00:15:09.639 "data_size": 63488 00:15:09.639 }, 00:15:09.639 { 00:15:09.639 "name": "pt2", 00:15:09.639 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:09.639 "is_configured": true, 00:15:09.639 "data_offset": 2048, 00:15:09.639 "data_size": 63488 00:15:09.639 }, 00:15:09.639 { 00:15:09.639 "name": "pt3", 00:15:09.639 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:09.639 "is_configured": true, 00:15:09.639 "data_offset": 2048, 00:15:09.639 "data_size": 63488 00:15:09.639 } 00:15:09.639 ] 00:15:09.639 }' 00:15:09.639 03:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.639 03:09:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.573 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:10.573 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:15:10.573 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:10.573 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:10.573 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:10.574 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:10.574 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:10.574 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:10.574 [2024-05-15 03:09:41.625544] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.574 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:10.574 "name": 
"raid_bdev1", 00:15:10.574 "aliases": [ 00:15:10.574 "760bbd9e-6944-4fe8-8605-53d9b0100787" 00:15:10.574 ], 00:15:10.574 "product_name": "Raid Volume", 00:15:10.574 "block_size": 512, 00:15:10.574 "num_blocks": 63488, 00:15:10.574 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:10.574 "assigned_rate_limits": { 00:15:10.574 "rw_ios_per_sec": 0, 00:15:10.574 "rw_mbytes_per_sec": 0, 00:15:10.574 "r_mbytes_per_sec": 0, 00:15:10.574 "w_mbytes_per_sec": 0 00:15:10.574 }, 00:15:10.574 "claimed": false, 00:15:10.574 "zoned": false, 00:15:10.574 "supported_io_types": { 00:15:10.574 "read": true, 00:15:10.574 "write": true, 00:15:10.574 "unmap": false, 00:15:10.574 "write_zeroes": true, 00:15:10.574 "flush": false, 00:15:10.574 "reset": true, 00:15:10.574 "compare": false, 00:15:10.574 "compare_and_write": false, 00:15:10.574 "abort": false, 00:15:10.574 "nvme_admin": false, 00:15:10.574 "nvme_io": false 00:15:10.574 }, 00:15:10.574 "memory_domains": [ 00:15:10.574 { 00:15:10.574 "dma_device_id": "system", 00:15:10.574 "dma_device_type": 1 00:15:10.574 }, 00:15:10.574 { 00:15:10.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.574 "dma_device_type": 2 00:15:10.574 }, 00:15:10.574 { 00:15:10.574 "dma_device_id": "system", 00:15:10.574 "dma_device_type": 1 00:15:10.574 }, 00:15:10.574 { 00:15:10.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.574 "dma_device_type": 2 00:15:10.574 }, 00:15:10.574 { 00:15:10.574 "dma_device_id": "system", 00:15:10.574 "dma_device_type": 1 00:15:10.574 }, 00:15:10.574 { 00:15:10.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.574 "dma_device_type": 2 00:15:10.574 } 00:15:10.574 ], 00:15:10.574 "driver_specific": { 00:15:10.574 "raid": { 00:15:10.574 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:10.574 "strip_size_kb": 0, 00:15:10.574 "state": "online", 00:15:10.574 "raid_level": "raid1", 00:15:10.574 "superblock": true, 00:15:10.574 "num_base_bdevs": 3, 00:15:10.574 "num_base_bdevs_discovered": 3, 00:15:10.574 "num_base_bdevs_operational": 3, 00:15:10.574 "base_bdevs_list": [ 00:15:10.574 { 00:15:10.574 "name": "pt1", 00:15:10.574 "uuid": "c24cccd4-b258-5a39-a3d3-d5d45a5fed44", 00:15:10.574 "is_configured": true, 00:15:10.574 "data_offset": 2048, 00:15:10.574 "data_size": 63488 00:15:10.574 }, 00:15:10.574 { 00:15:10.574 "name": "pt2", 00:15:10.574 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:10.574 "is_configured": true, 00:15:10.574 "data_offset": 2048, 00:15:10.574 "data_size": 63488 00:15:10.574 }, 00:15:10.574 { 00:15:10.574 "name": "pt3", 00:15:10.574 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:10.574 "is_configured": true, 00:15:10.574 "data_offset": 2048, 00:15:10.574 "data_size": 63488 00:15:10.574 } 00:15:10.574 ] 00:15:10.574 } 00:15:10.574 } 00:15:10.574 }' 00:15:10.574 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.574 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:15:10.574 pt2 00:15:10.574 pt3' 00:15:10.574 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:10.574 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:10.574 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:10.832 03:09:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:10.832 "name": "pt1", 00:15:10.832 "aliases": [ 00:15:10.832 "c24cccd4-b258-5a39-a3d3-d5d45a5fed44" 00:15:10.832 ], 00:15:10.832 "product_name": "passthru", 00:15:10.832 "block_size": 512, 00:15:10.832 "num_blocks": 65536, 00:15:10.832 "uuid": "c24cccd4-b258-5a39-a3d3-d5d45a5fed44", 00:15:10.832 "assigned_rate_limits": { 00:15:10.832 "rw_ios_per_sec": 0, 00:15:10.832 "rw_mbytes_per_sec": 0, 00:15:10.832 "r_mbytes_per_sec": 0, 00:15:10.832 "w_mbytes_per_sec": 0 00:15:10.832 }, 00:15:10.832 "claimed": true, 00:15:10.832 "claim_type": "exclusive_write", 00:15:10.832 "zoned": false, 00:15:10.832 "supported_io_types": { 00:15:10.832 "read": true, 00:15:10.832 "write": true, 00:15:10.832 "unmap": true, 00:15:10.832 "write_zeroes": true, 00:15:10.832 "flush": true, 00:15:10.832 "reset": true, 00:15:10.832 "compare": false, 00:15:10.832 "compare_and_write": false, 00:15:10.832 "abort": true, 00:15:10.832 "nvme_admin": false, 00:15:10.832 "nvme_io": false 00:15:10.832 }, 00:15:10.832 "memory_domains": [ 00:15:10.832 { 00:15:10.832 "dma_device_id": "system", 00:15:10.832 "dma_device_type": 1 00:15:10.832 }, 00:15:10.832 { 00:15:10.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.832 "dma_device_type": 2 00:15:10.832 } 00:15:10.832 ], 00:15:10.832 "driver_specific": { 00:15:10.832 "passthru": { 00:15:10.832 "name": "pt1", 00:15:10.832 "base_bdev_name": "malloc1" 00:15:10.832 } 00:15:10.832 } 00:15:10.832 }' 00:15:10.832 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:11.090 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:11.090 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:11.090 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:11.090 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:11.090 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.090 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:11.090 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:11.090 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.090 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:11.347 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:11.347 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:11.347 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:11.348 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:11.348 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:11.605 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:11.605 "name": "pt2", 00:15:11.605 "aliases": [ 00:15:11.605 "ade1da64-e325-5dbe-a20a-b972358a638f" 00:15:11.605 ], 00:15:11.605 "product_name": "passthru", 00:15:11.605 "block_size": 512, 00:15:11.605 "num_blocks": 65536, 00:15:11.605 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:11.605 "assigned_rate_limits": { 00:15:11.605 "rw_ios_per_sec": 0, 00:15:11.605 
"rw_mbytes_per_sec": 0, 00:15:11.605 "r_mbytes_per_sec": 0, 00:15:11.605 "w_mbytes_per_sec": 0 00:15:11.605 }, 00:15:11.605 "claimed": true, 00:15:11.605 "claim_type": "exclusive_write", 00:15:11.605 "zoned": false, 00:15:11.605 "supported_io_types": { 00:15:11.605 "read": true, 00:15:11.605 "write": true, 00:15:11.605 "unmap": true, 00:15:11.605 "write_zeroes": true, 00:15:11.605 "flush": true, 00:15:11.605 "reset": true, 00:15:11.605 "compare": false, 00:15:11.605 "compare_and_write": false, 00:15:11.605 "abort": true, 00:15:11.605 "nvme_admin": false, 00:15:11.605 "nvme_io": false 00:15:11.605 }, 00:15:11.605 "memory_domains": [ 00:15:11.605 { 00:15:11.605 "dma_device_id": "system", 00:15:11.605 "dma_device_type": 1 00:15:11.605 }, 00:15:11.605 { 00:15:11.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.605 "dma_device_type": 2 00:15:11.605 } 00:15:11.605 ], 00:15:11.605 "driver_specific": { 00:15:11.605 "passthru": { 00:15:11.605 "name": "pt2", 00:15:11.605 "base_bdev_name": "malloc2" 00:15:11.605 } 00:15:11.605 } 00:15:11.605 }' 00:15:11.605 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:11.605 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:11.605 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:11.605 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:11.605 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:11.862 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.862 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:11.862 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:11.862 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.862 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:11.862 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:11.862 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:11.862 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:11.862 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:11.862 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:12.130 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:12.130 "name": "pt3", 00:15:12.130 "aliases": [ 00:15:12.130 "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2" 00:15:12.130 ], 00:15:12.130 "product_name": "passthru", 00:15:12.130 "block_size": 512, 00:15:12.130 "num_blocks": 65536, 00:15:12.130 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:12.130 "assigned_rate_limits": { 00:15:12.130 "rw_ios_per_sec": 0, 00:15:12.130 "rw_mbytes_per_sec": 0, 00:15:12.130 "r_mbytes_per_sec": 0, 00:15:12.130 "w_mbytes_per_sec": 0 00:15:12.130 }, 00:15:12.130 "claimed": true, 00:15:12.130 "claim_type": "exclusive_write", 00:15:12.130 "zoned": false, 00:15:12.130 "supported_io_types": { 00:15:12.130 "read": true, 00:15:12.130 "write": true, 00:15:12.130 "unmap": true, 00:15:12.130 "write_zeroes": true, 00:15:12.130 "flush": true, 00:15:12.130 "reset": true, 
00:15:12.130 "compare": false, 00:15:12.130 "compare_and_write": false, 00:15:12.130 "abort": true, 00:15:12.130 "nvme_admin": false, 00:15:12.131 "nvme_io": false 00:15:12.131 }, 00:15:12.131 "memory_domains": [ 00:15:12.131 { 00:15:12.131 "dma_device_id": "system", 00:15:12.131 "dma_device_type": 1 00:15:12.131 }, 00:15:12.131 { 00:15:12.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.131 "dma_device_type": 2 00:15:12.131 } 00:15:12.131 ], 00:15:12.131 "driver_specific": { 00:15:12.131 "passthru": { 00:15:12.131 "name": "pt3", 00:15:12.131 "base_bdev_name": "malloc3" 00:15:12.131 } 00:15:12.131 } 00:15:12.131 }' 00:15:12.131 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:12.131 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:12.425 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:12.425 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:12.425 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:12.425 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.425 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:12.425 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:12.425 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.425 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:12.425 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:12.425 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:12.683 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:12.683 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:12.683 [2024-05-15 03:09:43.819492] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.940 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 760bbd9e-6944-4fe8-8605-53d9b0100787 '!=' 760bbd9e-6944-4fe8-8605-53d9b0100787 ']' 00:15:12.940 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:12.940 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:15:12.940 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:15:12.940 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:12.940 [2024-05-15 03:09:44.075934] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:12.940 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:12.940 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:12.940 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:12.940 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:12.940 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:15:12.940 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:12.940 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:12.940 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:12.940 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:12.940 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:12.940 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.940 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.197 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.197 "name": "raid_bdev1", 00:15:13.197 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:13.197 "strip_size_kb": 0, 00:15:13.197 "state": "online", 00:15:13.197 "raid_level": "raid1", 00:15:13.197 "superblock": true, 00:15:13.197 "num_base_bdevs": 3, 00:15:13.197 "num_base_bdevs_discovered": 2, 00:15:13.197 "num_base_bdevs_operational": 2, 00:15:13.197 "base_bdevs_list": [ 00:15:13.197 { 00:15:13.197 "name": null, 00:15:13.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.197 "is_configured": false, 00:15:13.197 "data_offset": 2048, 00:15:13.198 "data_size": 63488 00:15:13.198 }, 00:15:13.198 { 00:15:13.198 "name": "pt2", 00:15:13.198 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:13.198 "is_configured": true, 00:15:13.198 "data_offset": 2048, 00:15:13.198 "data_size": 63488 00:15:13.198 }, 00:15:13.198 { 00:15:13.198 "name": "pt3", 00:15:13.198 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:13.198 "is_configured": true, 00:15:13.198 "data_offset": 2048, 00:15:13.198 "data_size": 63488 00:15:13.198 } 00:15:13.198 ] 00:15:13.198 }' 00:15:13.198 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.198 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.130 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:14.130 [2024-05-15 03:09:45.190897] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.130 [2024-05-15 03:09:45.190920] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.130 [2024-05-15 03:09:45.190964] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.130 [2024-05-15 03:09:45.191020] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.130 [2024-05-15 03:09:45.191029] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28e7150 name raid_bdev1, state offline 00:15:14.130 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.130 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:14.388 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:14.388 03:09:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:14.388 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:14.388 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:14.388 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:14.645 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:14.646 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:14.646 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:14.903 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:14.903 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:14.903 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:14.903 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:14.903 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:15.161 [2024-05-15 03:09:46.213562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:15.161 [2024-05-15 03:09:46.213602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.161 [2024-05-15 03:09:46.213617] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x273d700 00:15:15.161 [2024-05-15 03:09:46.213626] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.161 [2024-05-15 03:09:46.215302] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.161 [2024-05-15 03:09:46.215328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:15.161 [2024-05-15 03:09:46.215392] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:15.161 [2024-05-15 03:09:46.215417] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:15.161 pt2 00:15:15.161 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:15.161 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:15.161 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:15.161 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:15.161 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:15.161 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:15.161 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:15.161 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:15.161 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:15.161 
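The passage above tears down the assembled array and its passthru bdevs, then re-creates pt2 on top of malloc2. Because malloc2 still carries the raid superblock written earlier, examining the new pt2 is enough for the raid module to re-assemble raid_bdev1 in the "configuring" state with one of three base bdevs discovered. A minimal sketch of that cycle, built only from the RPC calls visible in this trace (socket path, bdev names, and UUID copied from the log; the wrapper variable is an assumption, not the test's own helper):

    # RPC endpoint used throughout this test run (copied from the trace)
    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_delete raid_bdev1        # drop the assembled array
    $rpc bdev_passthru_delete pt2           # remove the surviving base bdevs
    $rpc bdev_passthru_delete pt3
    # re-expose one base bdev; the superblock on malloc2 triggers re-assembly
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $rpc bdev_raid_get_bdevs all            # raid_bdev1 is back, state "configuring"
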
03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:15.161 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.161 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.419 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:15.419 "name": "raid_bdev1", 00:15:15.419 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:15.419 "strip_size_kb": 0, 00:15:15.419 "state": "configuring", 00:15:15.419 "raid_level": "raid1", 00:15:15.419 "superblock": true, 00:15:15.419 "num_base_bdevs": 3, 00:15:15.419 "num_base_bdevs_discovered": 1, 00:15:15.419 "num_base_bdevs_operational": 2, 00:15:15.419 "base_bdevs_list": [ 00:15:15.419 { 00:15:15.419 "name": null, 00:15:15.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.419 "is_configured": false, 00:15:15.419 "data_offset": 2048, 00:15:15.419 "data_size": 63488 00:15:15.419 }, 00:15:15.419 { 00:15:15.419 "name": "pt2", 00:15:15.419 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:15.419 "is_configured": true, 00:15:15.419 "data_offset": 2048, 00:15:15.419 "data_size": 63488 00:15:15.419 }, 00:15:15.419 { 00:15:15.419 "name": null, 00:15:15.419 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:15.419 "is_configured": false, 00:15:15.419 "data_offset": 2048, 00:15:15.419 "data_size": 63488 00:15:15.419 } 00:15:15.419 ] 00:15:15.419 }' 00:15:15.419 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:15.419 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.984 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:15.984 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:15.984 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:15:15.984 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:16.243 [2024-05-15 03:09:47.348610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:16.243 [2024-05-15 03:09:47.348655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.243 [2024-05-15 03:09:47.348672] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e8d40 00:15:16.243 [2024-05-15 03:09:47.348681] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.243 [2024-05-15 03:09:47.349021] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.243 [2024-05-15 03:09:47.349036] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:16.243 [2024-05-15 03:09:47.349095] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:16.243 [2024-05-15 03:09:47.349112] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:16.243 [2024-05-15 03:09:47.349210] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x28e7e10 00:15:16.243 [2024-05-15 03:09:47.349224] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:15:16.243 [2024-05-15 03:09:47.349399] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x273cf90 00:15:16.243 [2024-05-15 03:09:47.349537] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28e7e10 00:15:16.243 [2024-05-15 03:09:47.349546] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x28e7e10 00:15:16.243 [2024-05-15 03:09:47.349646] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.243 pt3 00:15:16.243 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:16.243 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:16.243 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:16.243 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:16.243 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:16.243 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:16.243 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.243 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.243 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.243 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.243 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.243 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.502 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:16.502 "name": "raid_bdev1", 00:15:16.502 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:16.502 "strip_size_kb": 0, 00:15:16.502 "state": "online", 00:15:16.502 "raid_level": "raid1", 00:15:16.502 "superblock": true, 00:15:16.502 "num_base_bdevs": 3, 00:15:16.502 "num_base_bdevs_discovered": 2, 00:15:16.502 "num_base_bdevs_operational": 2, 00:15:16.502 "base_bdevs_list": [ 00:15:16.502 { 00:15:16.502 "name": null, 00:15:16.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.502 "is_configured": false, 00:15:16.502 "data_offset": 2048, 00:15:16.502 "data_size": 63488 00:15:16.502 }, 00:15:16.502 { 00:15:16.502 "name": "pt2", 00:15:16.502 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:16.502 "is_configured": true, 00:15:16.502 "data_offset": 2048, 00:15:16.502 "data_size": 63488 00:15:16.502 }, 00:15:16.502 { 00:15:16.502 "name": "pt3", 00:15:16.502 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:16.502 "is_configured": true, 00:15:16.502 "data_offset": 2048, 00:15:16.502 "data_size": 63488 00:15:16.502 } 00:15:16.502 ] 00:15:16.502 }' 00:15:16.502 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:16.502 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.069 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # '[' 3 -gt 2 ']' 00:15:17.069 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:17.328 [2024-05-15 03:09:48.423466] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.328 [2024-05-15 03:09:48.423490] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.328 [2024-05-15 03:09:48.423541] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.328 [2024-05-15 03:09:48.423597] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.328 [2024-05-15 03:09:48.423606] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28e7e10 name raid_bdev1, state offline 00:15:17.328 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.328 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # jq -r '.[]' 00:15:17.586 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # raid_bdev= 00:15:17.586 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@529 -- # '[' -n '' ']' 00:15:17.586 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:17.845 [2024-05-15 03:09:48.928796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:17.845 [2024-05-15 03:09:48.928842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.845 [2024-05-15 03:09:48.928867] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27399b0 00:15:17.845 [2024-05-15 03:09:48.928876] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.845 [2024-05-15 03:09:48.930558] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.845 [2024-05-15 03:09:48.930584] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:17.845 [2024-05-15 03:09:48.930646] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:17.845 [2024-05-15 03:09:48.930670] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:17.845 pt1 00:15:17.845 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:17.845 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:17.845 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:17.845 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:17.845 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:17.845 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:17.845 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:17.845 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:17.845 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:17.845 03:09:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:17.845 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.845 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.104 03:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.104 "name": "raid_bdev1", 00:15:18.104 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:18.104 "strip_size_kb": 0, 00:15:18.104 "state": "configuring", 00:15:18.104 "raid_level": "raid1", 00:15:18.104 "superblock": true, 00:15:18.104 "num_base_bdevs": 3, 00:15:18.104 "num_base_bdevs_discovered": 1, 00:15:18.104 "num_base_bdevs_operational": 3, 00:15:18.104 "base_bdevs_list": [ 00:15:18.104 { 00:15:18.104 "name": "pt1", 00:15:18.104 "uuid": "c24cccd4-b258-5a39-a3d3-d5d45a5fed44", 00:15:18.104 "is_configured": true, 00:15:18.104 "data_offset": 2048, 00:15:18.104 "data_size": 63488 00:15:18.104 }, 00:15:18.104 { 00:15:18.104 "name": null, 00:15:18.104 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:18.104 "is_configured": false, 00:15:18.104 "data_offset": 2048, 00:15:18.104 "data_size": 63488 00:15:18.104 }, 00:15:18.104 { 00:15:18.104 "name": null, 00:15:18.104 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:18.104 "is_configured": false, 00:15:18.104 "data_offset": 2048, 00:15:18.104 "data_size": 63488 00:15:18.104 } 00:15:18.104 ] 00:15:18.104 }' 00:15:18.104 03:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.104 03:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.672 03:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i = 1 )) 00:15:18.672 03:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:15:18.672 03:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:18.930 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:15:18.930 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:15:18.930 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:19.188 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:15:19.188 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:15:19.188 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # i=2 00:15:19.189 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:19.447 [2024-05-15 03:09:50.549142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:19.447 [2024-05-15 03:09:50.549186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.447 [2024-05-15 03:09:50.549205] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x273d230 00:15:19.447 [2024-05-15 03:09:50.549215] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.447 [2024-05-15 03:09:50.549556] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.447 [2024-05-15 03:09:50.549571] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:19.447 [2024-05-15 03:09:50.549628] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:19.447 [2024-05-15 03:09:50.549638] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:19.447 [2024-05-15 03:09:50.549645] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.447 [2024-05-15 03:09:50.549658] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x273a840 name raid_bdev1, state configuring 00:15:19.447 [2024-05-15 03:09:50.549684] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:19.447 pt3 00:15:19.447 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@551 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:19.447 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:19.447 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:19.447 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:19.447 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:19.447 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:19.447 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.447 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.447 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.447 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.447 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.447 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.705 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:19.705 "name": "raid_bdev1", 00:15:19.705 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:19.705 "strip_size_kb": 0, 00:15:19.705 "state": "configuring", 00:15:19.705 "raid_level": "raid1", 00:15:19.705 "superblock": true, 00:15:19.705 "num_base_bdevs": 3, 00:15:19.705 "num_base_bdevs_discovered": 1, 00:15:19.705 "num_base_bdevs_operational": 2, 00:15:19.705 "base_bdevs_list": [ 00:15:19.705 { 00:15:19.705 "name": null, 00:15:19.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.705 "is_configured": false, 00:15:19.705 "data_offset": 2048, 00:15:19.705 "data_size": 63488 00:15:19.705 }, 00:15:19.705 { 00:15:19.705 "name": null, 00:15:19.705 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:19.705 "is_configured": false, 00:15:19.705 "data_offset": 2048, 00:15:19.705 "data_size": 63488 00:15:19.705 }, 00:15:19.705 { 00:15:19.705 "name": "pt3", 00:15:19.705 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:19.705 "is_configured": true, 00:15:19.705 "data_offset": 
2048, 00:15:19.705 "data_size": 63488 00:15:19.705 } 00:15:19.705 ] 00:15:19.705 }' 00:15:19.705 03:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:19.705 03:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i = 1 )) 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:20.641 [2024-05-15 03:09:51.668145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:20.641 [2024-05-15 03:09:51.668196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.641 [2024-05-15 03:09:51.668215] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x273d700 00:15:20.641 [2024-05-15 03:09:51.668224] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.641 [2024-05-15 03:09:51.668588] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.641 [2024-05-15 03:09:51.668604] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:20.641 [2024-05-15 03:09:51.668665] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:20.641 [2024-05-15 03:09:51.668681] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:20.641 [2024-05-15 03:09:51.668782] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x273c3b0 00:15:20.641 [2024-05-15 03:09:51.668791] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:20.641 [2024-05-15 03:09:51.668992] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x273baa0 00:15:20.641 [2024-05-15 03:09:51.669136] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x273c3b0 00:15:20.641 [2024-05-15 03:09:51.669145] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x273c3b0 00:15:20.641 [2024-05-15 03:09:51.669249] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.641 pt2 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i++ )) 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@559 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.641 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.900 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.900 "name": "raid_bdev1", 00:15:20.900 "uuid": "760bbd9e-6944-4fe8-8605-53d9b0100787", 00:15:20.900 "strip_size_kb": 0, 00:15:20.900 "state": "online", 00:15:20.900 "raid_level": "raid1", 00:15:20.900 "superblock": true, 00:15:20.900 "num_base_bdevs": 3, 00:15:20.900 "num_base_bdevs_discovered": 2, 00:15:20.900 "num_base_bdevs_operational": 2, 00:15:20.900 "base_bdevs_list": [ 00:15:20.900 { 00:15:20.900 "name": null, 00:15:20.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.900 "is_configured": false, 00:15:20.900 "data_offset": 2048, 00:15:20.900 "data_size": 63488 00:15:20.900 }, 00:15:20.900 { 00:15:20.900 "name": "pt2", 00:15:20.900 "uuid": "ade1da64-e325-5dbe-a20a-b972358a638f", 00:15:20.900 "is_configured": true, 00:15:20.900 "data_offset": 2048, 00:15:20.900 "data_size": 63488 00:15:20.900 }, 00:15:20.900 { 00:15:20.900 "name": "pt3", 00:15:20.900 "uuid": "1d0a44e6-5266-5619-b4f1-95dcc3bc2ac2", 00:15:20.900 "is_configured": true, 00:15:20.900 "data_offset": 2048, 00:15:20.900 "data_size": 63488 00:15:20.900 } 00:15:20.900 ] 00:15:20.900 }' 00:15:20.900 03:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.900 03:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.466 03:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:21.466 03:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:15:21.725 [2024-05-15 03:09:52.795397] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.725 03:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # '[' 760bbd9e-6944-4fe8-8605-53d9b0100787 '!=' 760bbd9e-6944-4fe8-8605-53d9b0100787 ']' 00:15:21.725 03:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 4098456 00:15:21.725 03:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 4098456 ']' 00:15:21.725 03:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 4098456 00:15:21.725 03:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:15:21.725 03:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:21.725 03:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4098456 00:15:21.725 03:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:21.725 03:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:21.725 03:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4098456' 
00:15:21.725 killing process with pid 4098456 00:15:21.725 03:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 4098456 00:15:21.725 [2024-05-15 03:09:52.864094] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.725 [2024-05-15 03:09:52.864155] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.725 [2024-05-15 03:09:52.864211] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.725 [2024-05-15 03:09:52.864220] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x273c3b0 name raid_bdev1, state offline 00:15:21.725 03:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 4098456 00:15:21.983 [2024-05-15 03:09:52.889529] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.983 03:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:15:21.983 00:15:21.983 real 0m23.639s 00:15:21.983 user 0m44.132s 00:15:21.983 sys 0m3.326s 00:15:21.983 03:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:21.983 03:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.983 ************************************ 00:15:21.983 END TEST raid_superblock_test 00:15:21.983 ************************************ 00:15:22.243 03:09:53 bdev_raid -- bdev/bdev_raid.sh@813 -- # for n in {2..4} 00:15:22.243 03:09:53 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:15:22.243 03:09:53 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:15:22.243 03:09:53 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:22.243 03:09:53 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:22.243 03:09:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.243 ************************************ 00:15:22.243 START TEST raid_state_function_test 00:15:22.243 ************************************ 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 false 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:22.243 03:09:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=4102788 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4102788' 00:15:22.243 Process raid pid: 4102788 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 4102788 /var/tmp/spdk-raid.sock 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 4102788 ']' 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:22.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:22.243 03:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.243 [2024-05-15 03:09:53.246172] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
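Here raid_state_function_test launches a bare bdev_svc stub application on a private RPC socket and blocks until it listens before driving it. A sketch of that startup, using only the paths and flags traced above (the polling loop is a crude stand-in for the harness's waitforlisten helper, not its actual implementation):

    # start the stub app with raid debug logging on a private RPC socket
    /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # waitforlisten blocks here; approximated by polling a known RPC method
    until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
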
00:15:22.243 [2024-05-15 03:09:53.246227] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.243 [2024-05-15 03:09:53.342728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.502 [2024-05-15 03:09:53.437039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.502 [2024-05-15 03:09:53.501234] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.502 [2024-05-15 03:09:53.501265] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.067 03:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:23.067 03:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:15:23.067 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:23.325 [2024-05-15 03:09:54.428226] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.325 [2024-05-15 03:09:54.428265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.325 [2024-05-15 03:09:54.428274] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.325 [2024-05-15 03:09:54.428283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.325 [2024-05-15 03:09:54.428291] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.325 [2024-05-15 03:09:54.428299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.325 [2024-05-15 03:09:54.428306] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:23.325 [2024-05-15 03:09:54.428314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:23.325 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:23.325 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:23.325 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:23.325 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:23.325 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:23.325 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:23.325 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:23.325 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:23.325 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:23.325 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:23.325 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.325 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.584 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.584 "name": "Existed_Raid", 00:15:23.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.584 "strip_size_kb": 64, 00:15:23.584 "state": "configuring", 00:15:23.584 "raid_level": "raid0", 00:15:23.584 "superblock": false, 00:15:23.584 "num_base_bdevs": 4, 00:15:23.584 "num_base_bdevs_discovered": 0, 00:15:23.584 "num_base_bdevs_operational": 4, 00:15:23.584 "base_bdevs_list": [ 00:15:23.584 { 00:15:23.584 "name": "BaseBdev1", 00:15:23.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.584 "is_configured": false, 00:15:23.584 "data_offset": 0, 00:15:23.584 "data_size": 0 00:15:23.584 }, 00:15:23.584 { 00:15:23.584 "name": "BaseBdev2", 00:15:23.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.584 "is_configured": false, 00:15:23.584 "data_offset": 0, 00:15:23.584 "data_size": 0 00:15:23.584 }, 00:15:23.584 { 00:15:23.584 "name": "BaseBdev3", 00:15:23.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.584 "is_configured": false, 00:15:23.584 "data_offset": 0, 00:15:23.584 "data_size": 0 00:15:23.584 }, 00:15:23.584 { 00:15:23.584 "name": "BaseBdev4", 00:15:23.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.584 "is_configured": false, 00:15:23.584 "data_offset": 0, 00:15:23.584 "data_size": 0 00:15:23.584 } 00:15:23.584 ] 00:15:23.584 }' 00:15:23.584 03:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.584 03:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.519 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:24.519 [2024-05-15 03:09:55.551078] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.519 [2024-05-15 03:09:55.551107] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23fee00 name Existed_Raid, state configuring 00:15:24.519 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:24.777 [2024-05-15 03:09:55.711526] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.777 [2024-05-15 03:09:55.711553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.777 [2024-05-15 03:09:55.711561] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.777 [2024-05-15 03:09:55.711569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.777 [2024-05-15 03:09:55.711577] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.777 [2024-05-15 03:09:55.711585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.777 [2024-05-15 03:09:55.711592] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.777 [2024-05-15 03:09:55.711599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.777 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:24.777 [2024-05-15 03:09:55.873506] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.777 BaseBdev1 00:15:24.777 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:15:24.777 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:24.777 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:24.777 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:24.777 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:24.777 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:24.777 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:25.036 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.294 [ 00:15:25.294 { 00:15:25.294 "name": "BaseBdev1", 00:15:25.294 "aliases": [ 00:15:25.294 "c8060c27-9f82-4a84-833b-fbdadeacd600" 00:15:25.294 ], 00:15:25.294 "product_name": "Malloc disk", 00:15:25.294 "block_size": 512, 00:15:25.294 "num_blocks": 65536, 00:15:25.294 "uuid": "c8060c27-9f82-4a84-833b-fbdadeacd600", 00:15:25.294 "assigned_rate_limits": { 00:15:25.294 "rw_ios_per_sec": 0, 00:15:25.294 "rw_mbytes_per_sec": 0, 00:15:25.294 "r_mbytes_per_sec": 0, 00:15:25.294 "w_mbytes_per_sec": 0 00:15:25.294 }, 00:15:25.294 "claimed": true, 00:15:25.294 "claim_type": "exclusive_write", 00:15:25.294 "zoned": false, 00:15:25.294 "supported_io_types": { 00:15:25.294 "read": true, 00:15:25.294 "write": true, 00:15:25.294 "unmap": true, 00:15:25.294 "write_zeroes": true, 00:15:25.294 "flush": true, 00:15:25.294 "reset": true, 00:15:25.294 "compare": false, 00:15:25.294 "compare_and_write": false, 00:15:25.294 "abort": true, 00:15:25.294 "nvme_admin": false, 00:15:25.294 "nvme_io": false 00:15:25.294 }, 00:15:25.294 "memory_domains": [ 00:15:25.294 { 00:15:25.294 "dma_device_id": "system", 00:15:25.294 "dma_device_type": 1 00:15:25.294 }, 00:15:25.294 { 00:15:25.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.294 "dma_device_type": 2 00:15:25.294 } 00:15:25.294 ], 00:15:25.294 "driver_specific": {} 00:15:25.294 } 00:15:25.294 ] 00:15:25.294 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:25.294 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:25.294 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:25.294 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:25.294 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:25.294 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:15:25.294 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:25.294 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.294 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.294 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.294 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.295 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.295 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.553 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.553 "name": "Existed_Raid", 00:15:25.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.553 "strip_size_kb": 64, 00:15:25.553 "state": "configuring", 00:15:25.553 "raid_level": "raid0", 00:15:25.553 "superblock": false, 00:15:25.553 "num_base_bdevs": 4, 00:15:25.553 "num_base_bdevs_discovered": 1, 00:15:25.553 "num_base_bdevs_operational": 4, 00:15:25.553 "base_bdevs_list": [ 00:15:25.553 { 00:15:25.553 "name": "BaseBdev1", 00:15:25.553 "uuid": "c8060c27-9f82-4a84-833b-fbdadeacd600", 00:15:25.553 "is_configured": true, 00:15:25.553 "data_offset": 0, 00:15:25.553 "data_size": 65536 00:15:25.553 }, 00:15:25.553 { 00:15:25.553 "name": "BaseBdev2", 00:15:25.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.553 "is_configured": false, 00:15:25.553 "data_offset": 0, 00:15:25.553 "data_size": 0 00:15:25.553 }, 00:15:25.553 { 00:15:25.553 "name": "BaseBdev3", 00:15:25.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.553 "is_configured": false, 00:15:25.553 "data_offset": 0, 00:15:25.553 "data_size": 0 00:15:25.553 }, 00:15:25.553 { 00:15:25.553 "name": "BaseBdev4", 00:15:25.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.553 "is_configured": false, 00:15:25.553 "data_offset": 0, 00:15:25.553 "data_size": 0 00:15:25.553 } 00:15:25.553 ] 00:15:25.553 }' 00:15:25.553 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.553 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.120 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:26.378 [2024-05-15 03:09:57.473805] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.378 [2024-05-15 03:09:57.473843] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23ff0a0 name Existed_Raid, state configuring 00:15:26.378 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:26.636 [2024-05-15 03:09:57.714480] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.636 [2024-05-15 03:09:57.715994] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.636 
[2024-05-15 03:09:57.716030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.636 [2024-05-15 03:09:57.716039] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:26.637 [2024-05-15 03:09:57.716047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:26.637 [2024-05-15 03:09:57.716055] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:26.637 [2024-05-15 03:09:57.716063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.637 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.894 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.894 "name": "Existed_Raid", 00:15:26.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.894 "strip_size_kb": 64, 00:15:26.894 "state": "configuring", 00:15:26.894 "raid_level": "raid0", 00:15:26.894 "superblock": false, 00:15:26.894 "num_base_bdevs": 4, 00:15:26.894 "num_base_bdevs_discovered": 1, 00:15:26.894 "num_base_bdevs_operational": 4, 00:15:26.894 "base_bdevs_list": [ 00:15:26.894 { 00:15:26.894 "name": "BaseBdev1", 00:15:26.894 "uuid": "c8060c27-9f82-4a84-833b-fbdadeacd600", 00:15:26.894 "is_configured": true, 00:15:26.894 "data_offset": 0, 00:15:26.894 "data_size": 65536 00:15:26.894 }, 00:15:26.894 { 00:15:26.894 "name": "BaseBdev2", 00:15:26.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.894 "is_configured": false, 00:15:26.894 "data_offset": 0, 00:15:26.894 "data_size": 0 00:15:26.894 }, 00:15:26.894 { 00:15:26.894 "name": "BaseBdev3", 00:15:26.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.894 "is_configured": false, 00:15:26.894 "data_offset": 0, 00:15:26.894 "data_size": 0 00:15:26.894 }, 00:15:26.894 { 00:15:26.894 "name": 
"BaseBdev4", 00:15:26.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.894 "is_configured": false, 00:15:26.894 "data_offset": 0, 00:15:26.894 "data_size": 0 00:15:26.894 } 00:15:26.894 ] 00:15:26.894 }' 00:15:26.894 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.894 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.459 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:27.718 [2024-05-15 03:09:58.848661] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.718 BaseBdev2 00:15:27.718 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:15:27.718 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:27.718 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:27.718 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:27.718 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:27.718 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:27.718 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:27.977 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.236 [ 00:15:28.236 { 00:15:28.236 "name": "BaseBdev2", 00:15:28.236 "aliases": [ 00:15:28.236 "1d3a3b83-2779-416e-bd68-4a2294bd3ce3" 00:15:28.236 ], 00:15:28.236 "product_name": "Malloc disk", 00:15:28.236 "block_size": 512, 00:15:28.236 "num_blocks": 65536, 00:15:28.236 "uuid": "1d3a3b83-2779-416e-bd68-4a2294bd3ce3", 00:15:28.236 "assigned_rate_limits": { 00:15:28.236 "rw_ios_per_sec": 0, 00:15:28.236 "rw_mbytes_per_sec": 0, 00:15:28.236 "r_mbytes_per_sec": 0, 00:15:28.236 "w_mbytes_per_sec": 0 00:15:28.236 }, 00:15:28.236 "claimed": true, 00:15:28.236 "claim_type": "exclusive_write", 00:15:28.236 "zoned": false, 00:15:28.236 "supported_io_types": { 00:15:28.236 "read": true, 00:15:28.236 "write": true, 00:15:28.236 "unmap": true, 00:15:28.236 "write_zeroes": true, 00:15:28.236 "flush": true, 00:15:28.236 "reset": true, 00:15:28.236 "compare": false, 00:15:28.236 "compare_and_write": false, 00:15:28.236 "abort": true, 00:15:28.236 "nvme_admin": false, 00:15:28.236 "nvme_io": false 00:15:28.236 }, 00:15:28.236 "memory_domains": [ 00:15:28.236 { 00:15:28.236 "dma_device_id": "system", 00:15:28.236 "dma_device_type": 1 00:15:28.236 }, 00:15:28.236 { 00:15:28.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.236 "dma_device_type": 2 00:15:28.236 } 00:15:28.236 ], 00:15:28.236 "driver_specific": {} 00:15:28.236 } 00:15:28.236 ] 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 
00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.236 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.495 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.495 "name": "Existed_Raid", 00:15:28.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.495 "strip_size_kb": 64, 00:15:28.495 "state": "configuring", 00:15:28.495 "raid_level": "raid0", 00:15:28.495 "superblock": false, 00:15:28.495 "num_base_bdevs": 4, 00:15:28.495 "num_base_bdevs_discovered": 2, 00:15:28.495 "num_base_bdevs_operational": 4, 00:15:28.495 "base_bdevs_list": [ 00:15:28.495 { 00:15:28.495 "name": "BaseBdev1", 00:15:28.495 "uuid": "c8060c27-9f82-4a84-833b-fbdadeacd600", 00:15:28.495 "is_configured": true, 00:15:28.495 "data_offset": 0, 00:15:28.495 "data_size": 65536 00:15:28.495 }, 00:15:28.495 { 00:15:28.495 "name": "BaseBdev2", 00:15:28.495 "uuid": "1d3a3b83-2779-416e-bd68-4a2294bd3ce3", 00:15:28.495 "is_configured": true, 00:15:28.495 "data_offset": 0, 00:15:28.495 "data_size": 65536 00:15:28.495 }, 00:15:28.495 { 00:15:28.495 "name": "BaseBdev3", 00:15:28.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.495 "is_configured": false, 00:15:28.495 "data_offset": 0, 00:15:28.495 "data_size": 0 00:15:28.495 }, 00:15:28.495 { 00:15:28.495 "name": "BaseBdev4", 00:15:28.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.495 "is_configured": false, 00:15:28.495 "data_offset": 0, 00:15:28.495 "data_size": 0 00:15:28.495 } 00:15:28.495 ] 00:15:28.495 }' 00:15:28.495 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.495 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.136 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:29.394 [2024-05-15 03:10:00.468286] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.394 BaseBdev3 00:15:29.394 03:10:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:15:29.394 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:29.394 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:29.394 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:29.394 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:29.394 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:29.394 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:29.652 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:29.910 [ 00:15:29.910 { 00:15:29.910 "name": "BaseBdev3", 00:15:29.910 "aliases": [ 00:15:29.910 "0c4e216b-e574-42f9-ac0f-c8c60955ecd9" 00:15:29.910 ], 00:15:29.910 "product_name": "Malloc disk", 00:15:29.910 "block_size": 512, 00:15:29.910 "num_blocks": 65536, 00:15:29.910 "uuid": "0c4e216b-e574-42f9-ac0f-c8c60955ecd9", 00:15:29.910 "assigned_rate_limits": { 00:15:29.910 "rw_ios_per_sec": 0, 00:15:29.910 "rw_mbytes_per_sec": 0, 00:15:29.910 "r_mbytes_per_sec": 0, 00:15:29.910 "w_mbytes_per_sec": 0 00:15:29.910 }, 00:15:29.910 "claimed": true, 00:15:29.910 "claim_type": "exclusive_write", 00:15:29.910 "zoned": false, 00:15:29.910 "supported_io_types": { 00:15:29.910 "read": true, 00:15:29.910 "write": true, 00:15:29.910 "unmap": true, 00:15:29.910 "write_zeroes": true, 00:15:29.910 "flush": true, 00:15:29.910 "reset": true, 00:15:29.910 "compare": false, 00:15:29.910 "compare_and_write": false, 00:15:29.910 "abort": true, 00:15:29.910 "nvme_admin": false, 00:15:29.910 "nvme_io": false 00:15:29.910 }, 00:15:29.910 "memory_domains": [ 00:15:29.910 { 00:15:29.910 "dma_device_id": "system", 00:15:29.910 "dma_device_type": 1 00:15:29.910 }, 00:15:29.910 { 00:15:29.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.910 "dma_device_type": 2 00:15:29.910 } 00:15:29.910 ], 00:15:29.910 "driver_specific": {} 00:15:29.910 } 00:15:29.910 ] 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.910 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.168 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.168 "name": "Existed_Raid", 00:15:30.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.168 "strip_size_kb": 64, 00:15:30.168 "state": "configuring", 00:15:30.168 "raid_level": "raid0", 00:15:30.168 "superblock": false, 00:15:30.168 "num_base_bdevs": 4, 00:15:30.168 "num_base_bdevs_discovered": 3, 00:15:30.168 "num_base_bdevs_operational": 4, 00:15:30.168 "base_bdevs_list": [ 00:15:30.168 { 00:15:30.168 "name": "BaseBdev1", 00:15:30.168 "uuid": "c8060c27-9f82-4a84-833b-fbdadeacd600", 00:15:30.168 "is_configured": true, 00:15:30.168 "data_offset": 0, 00:15:30.168 "data_size": 65536 00:15:30.168 }, 00:15:30.168 { 00:15:30.168 "name": "BaseBdev2", 00:15:30.168 "uuid": "1d3a3b83-2779-416e-bd68-4a2294bd3ce3", 00:15:30.168 "is_configured": true, 00:15:30.168 "data_offset": 0, 00:15:30.168 "data_size": 65536 00:15:30.168 }, 00:15:30.168 { 00:15:30.168 "name": "BaseBdev3", 00:15:30.168 "uuid": "0c4e216b-e574-42f9-ac0f-c8c60955ecd9", 00:15:30.168 "is_configured": true, 00:15:30.168 "data_offset": 0, 00:15:30.168 "data_size": 65536 00:15:30.168 }, 00:15:30.168 { 00:15:30.168 "name": "BaseBdev4", 00:15:30.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.168 "is_configured": false, 00:15:30.168 "data_offset": 0, 00:15:30.168 "data_size": 0 00:15:30.168 } 00:15:30.168 ] 00:15:30.168 }' 00:15:30.168 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.168 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.734 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:30.992 [2024-05-15 03:10:02.124049] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:30.992 [2024-05-15 03:10:02.124081] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x23fe670 00:15:30.992 [2024-05-15 03:10:02.124088] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:30.992 [2024-05-15 03:10:02.124285] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2400a30 00:15:30.992 [2024-05-15 03:10:02.124416] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23fe670 00:15:30.992 [2024-05-15 03:10:02.124425] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x23fe670 00:15:30.992 [2024-05-15 03:10:02.124588] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.992 BaseBdev4 00:15:30.992 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:15:30.992 03:10:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:15:30.992 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:30.992 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:30.992 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:30.992 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:30.993 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.249 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:31.507 [ 00:15:31.507 { 00:15:31.507 "name": "BaseBdev4", 00:15:31.507 "aliases": [ 00:15:31.507 "23f8367e-5c0f-4f5d-b072-e0f3864eb209" 00:15:31.507 ], 00:15:31.507 "product_name": "Malloc disk", 00:15:31.507 "block_size": 512, 00:15:31.507 "num_blocks": 65536, 00:15:31.507 "uuid": "23f8367e-5c0f-4f5d-b072-e0f3864eb209", 00:15:31.507 "assigned_rate_limits": { 00:15:31.507 "rw_ios_per_sec": 0, 00:15:31.507 "rw_mbytes_per_sec": 0, 00:15:31.507 "r_mbytes_per_sec": 0, 00:15:31.507 "w_mbytes_per_sec": 0 00:15:31.507 }, 00:15:31.507 "claimed": true, 00:15:31.507 "claim_type": "exclusive_write", 00:15:31.507 "zoned": false, 00:15:31.507 "supported_io_types": { 00:15:31.507 "read": true, 00:15:31.507 "write": true, 00:15:31.507 "unmap": true, 00:15:31.507 "write_zeroes": true, 00:15:31.507 "flush": true, 00:15:31.507 "reset": true, 00:15:31.507 "compare": false, 00:15:31.507 "compare_and_write": false, 00:15:31.507 "abort": true, 00:15:31.507 "nvme_admin": false, 00:15:31.507 "nvme_io": false 00:15:31.507 }, 00:15:31.507 "memory_domains": [ 00:15:31.507 { 00:15:31.507 "dma_device_id": "system", 00:15:31.507 "dma_device_type": 1 00:15:31.507 }, 00:15:31.507 { 00:15:31.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.507 "dma_device_type": 2 00:15:31.507 } 00:15:31.507 ], 00:15:31.507 "driver_specific": {} 00:15:31.507 } 00:15:31.507 ] 00:15:31.507 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:31.507 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:31.507 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:31.507 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:31.507 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.507 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:31.507 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:31.507 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:31.508 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:31.508 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.508 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:15:31.508 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.508 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.508 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.508 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.766 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.766 "name": "Existed_Raid", 00:15:31.766 "uuid": "59f34446-e87f-49d2-a468-8ca0076f1c90", 00:15:31.766 "strip_size_kb": 64, 00:15:31.766 "state": "online", 00:15:31.766 "raid_level": "raid0", 00:15:31.766 "superblock": false, 00:15:31.766 "num_base_bdevs": 4, 00:15:31.766 "num_base_bdevs_discovered": 4, 00:15:31.766 "num_base_bdevs_operational": 4, 00:15:31.766 "base_bdevs_list": [ 00:15:31.766 { 00:15:31.766 "name": "BaseBdev1", 00:15:31.766 "uuid": "c8060c27-9f82-4a84-833b-fbdadeacd600", 00:15:31.766 "is_configured": true, 00:15:31.766 "data_offset": 0, 00:15:31.766 "data_size": 65536 00:15:31.766 }, 00:15:31.766 { 00:15:31.766 "name": "BaseBdev2", 00:15:31.766 "uuid": "1d3a3b83-2779-416e-bd68-4a2294bd3ce3", 00:15:31.766 "is_configured": true, 00:15:31.766 "data_offset": 0, 00:15:31.766 "data_size": 65536 00:15:31.766 }, 00:15:31.766 { 00:15:31.766 "name": "BaseBdev3", 00:15:31.766 "uuid": "0c4e216b-e574-42f9-ac0f-c8c60955ecd9", 00:15:31.766 "is_configured": true, 00:15:31.766 "data_offset": 0, 00:15:31.766 "data_size": 65536 00:15:31.766 }, 00:15:31.766 { 00:15:31.766 "name": "BaseBdev4", 00:15:31.766 "uuid": "23f8367e-5c0f-4f5d-b072-e0f3864eb209", 00:15:31.766 "is_configured": true, 00:15:31.766 "data_offset": 0, 00:15:31.766 "data_size": 65536 00:15:31.766 } 00:15:31.766 ] 00:15:31.766 }' 00:15:31.766 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.766 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:32.703 [2024-05-15 03:10:03.744739] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:32.703 "name": "Existed_Raid", 00:15:32.703 "aliases": [ 00:15:32.703 "59f34446-e87f-49d2-a468-8ca0076f1c90" 00:15:32.703 ], 00:15:32.703 
"product_name": "Raid Volume", 00:15:32.703 "block_size": 512, 00:15:32.703 "num_blocks": 262144, 00:15:32.703 "uuid": "59f34446-e87f-49d2-a468-8ca0076f1c90", 00:15:32.703 "assigned_rate_limits": { 00:15:32.703 "rw_ios_per_sec": 0, 00:15:32.703 "rw_mbytes_per_sec": 0, 00:15:32.703 "r_mbytes_per_sec": 0, 00:15:32.703 "w_mbytes_per_sec": 0 00:15:32.703 }, 00:15:32.703 "claimed": false, 00:15:32.703 "zoned": false, 00:15:32.703 "supported_io_types": { 00:15:32.703 "read": true, 00:15:32.703 "write": true, 00:15:32.703 "unmap": true, 00:15:32.703 "write_zeroes": true, 00:15:32.703 "flush": true, 00:15:32.703 "reset": true, 00:15:32.703 "compare": false, 00:15:32.703 "compare_and_write": false, 00:15:32.703 "abort": false, 00:15:32.703 "nvme_admin": false, 00:15:32.703 "nvme_io": false 00:15:32.703 }, 00:15:32.703 "memory_domains": [ 00:15:32.703 { 00:15:32.703 "dma_device_id": "system", 00:15:32.703 "dma_device_type": 1 00:15:32.703 }, 00:15:32.703 { 00:15:32.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.703 "dma_device_type": 2 00:15:32.703 }, 00:15:32.703 { 00:15:32.703 "dma_device_id": "system", 00:15:32.703 "dma_device_type": 1 00:15:32.703 }, 00:15:32.703 { 00:15:32.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.703 "dma_device_type": 2 00:15:32.703 }, 00:15:32.703 { 00:15:32.703 "dma_device_id": "system", 00:15:32.703 "dma_device_type": 1 00:15:32.703 }, 00:15:32.703 { 00:15:32.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.703 "dma_device_type": 2 00:15:32.703 }, 00:15:32.703 { 00:15:32.703 "dma_device_id": "system", 00:15:32.703 "dma_device_type": 1 00:15:32.703 }, 00:15:32.703 { 00:15:32.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.703 "dma_device_type": 2 00:15:32.703 } 00:15:32.703 ], 00:15:32.703 "driver_specific": { 00:15:32.703 "raid": { 00:15:32.703 "uuid": "59f34446-e87f-49d2-a468-8ca0076f1c90", 00:15:32.703 "strip_size_kb": 64, 00:15:32.703 "state": "online", 00:15:32.703 "raid_level": "raid0", 00:15:32.703 "superblock": false, 00:15:32.703 "num_base_bdevs": 4, 00:15:32.703 "num_base_bdevs_discovered": 4, 00:15:32.703 "num_base_bdevs_operational": 4, 00:15:32.703 "base_bdevs_list": [ 00:15:32.703 { 00:15:32.703 "name": "BaseBdev1", 00:15:32.703 "uuid": "c8060c27-9f82-4a84-833b-fbdadeacd600", 00:15:32.703 "is_configured": true, 00:15:32.703 "data_offset": 0, 00:15:32.703 "data_size": 65536 00:15:32.703 }, 00:15:32.703 { 00:15:32.703 "name": "BaseBdev2", 00:15:32.703 "uuid": "1d3a3b83-2779-416e-bd68-4a2294bd3ce3", 00:15:32.703 "is_configured": true, 00:15:32.703 "data_offset": 0, 00:15:32.703 "data_size": 65536 00:15:32.703 }, 00:15:32.703 { 00:15:32.703 "name": "BaseBdev3", 00:15:32.703 "uuid": "0c4e216b-e574-42f9-ac0f-c8c60955ecd9", 00:15:32.703 "is_configured": true, 00:15:32.703 "data_offset": 0, 00:15:32.703 "data_size": 65536 00:15:32.703 }, 00:15:32.703 { 00:15:32.703 "name": "BaseBdev4", 00:15:32.703 "uuid": "23f8367e-5c0f-4f5d-b072-e0f3864eb209", 00:15:32.703 "is_configured": true, 00:15:32.703 "data_offset": 0, 00:15:32.703 "data_size": 65536 00:15:32.703 } 00:15:32.703 ] 00:15:32.703 } 00:15:32.703 } 00:15:32.703 }' 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:15:32.703 BaseBdev2 00:15:32.703 BaseBdev3 00:15:32.703 BaseBdev4' 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:32.703 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:32.961 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:32.961 "name": "BaseBdev1", 00:15:32.961 "aliases": [ 00:15:32.961 "c8060c27-9f82-4a84-833b-fbdadeacd600" 00:15:32.961 ], 00:15:32.961 "product_name": "Malloc disk", 00:15:32.961 "block_size": 512, 00:15:32.961 "num_blocks": 65536, 00:15:32.961 "uuid": "c8060c27-9f82-4a84-833b-fbdadeacd600", 00:15:32.961 "assigned_rate_limits": { 00:15:32.961 "rw_ios_per_sec": 0, 00:15:32.961 "rw_mbytes_per_sec": 0, 00:15:32.961 "r_mbytes_per_sec": 0, 00:15:32.961 "w_mbytes_per_sec": 0 00:15:32.961 }, 00:15:32.961 "claimed": true, 00:15:32.961 "claim_type": "exclusive_write", 00:15:32.961 "zoned": false, 00:15:32.961 "supported_io_types": { 00:15:32.961 "read": true, 00:15:32.961 "write": true, 00:15:32.961 "unmap": true, 00:15:32.961 "write_zeroes": true, 00:15:32.961 "flush": true, 00:15:32.961 "reset": true, 00:15:32.961 "compare": false, 00:15:32.961 "compare_and_write": false, 00:15:32.961 "abort": true, 00:15:32.961 "nvme_admin": false, 00:15:32.961 "nvme_io": false 00:15:32.961 }, 00:15:32.961 "memory_domains": [ 00:15:32.961 { 00:15:32.961 "dma_device_id": "system", 00:15:32.961 "dma_device_type": 1 00:15:32.961 }, 00:15:32.961 { 00:15:32.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.961 "dma_device_type": 2 00:15:32.962 } 00:15:32.962 ], 00:15:32.962 "driver_specific": {} 00:15:32.962 }' 00:15:32.962 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:32.962 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:32.962 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:32.962 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:32.962 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:33.220 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:33.220 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:33.220 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:33.220 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:33.220 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:33.220 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:33.220 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:33.220 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:33.220 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:33.220 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:33.478 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:33.478 "name": 
"BaseBdev2", 00:15:33.478 "aliases": [ 00:15:33.478 "1d3a3b83-2779-416e-bd68-4a2294bd3ce3" 00:15:33.478 ], 00:15:33.478 "product_name": "Malloc disk", 00:15:33.478 "block_size": 512, 00:15:33.478 "num_blocks": 65536, 00:15:33.478 "uuid": "1d3a3b83-2779-416e-bd68-4a2294bd3ce3", 00:15:33.478 "assigned_rate_limits": { 00:15:33.478 "rw_ios_per_sec": 0, 00:15:33.478 "rw_mbytes_per_sec": 0, 00:15:33.478 "r_mbytes_per_sec": 0, 00:15:33.478 "w_mbytes_per_sec": 0 00:15:33.478 }, 00:15:33.478 "claimed": true, 00:15:33.478 "claim_type": "exclusive_write", 00:15:33.478 "zoned": false, 00:15:33.478 "supported_io_types": { 00:15:33.478 "read": true, 00:15:33.478 "write": true, 00:15:33.478 "unmap": true, 00:15:33.478 "write_zeroes": true, 00:15:33.478 "flush": true, 00:15:33.478 "reset": true, 00:15:33.478 "compare": false, 00:15:33.478 "compare_and_write": false, 00:15:33.478 "abort": true, 00:15:33.478 "nvme_admin": false, 00:15:33.478 "nvme_io": false 00:15:33.478 }, 00:15:33.478 "memory_domains": [ 00:15:33.478 { 00:15:33.478 "dma_device_id": "system", 00:15:33.478 "dma_device_type": 1 00:15:33.478 }, 00:15:33.478 { 00:15:33.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.478 "dma_device_type": 2 00:15:33.478 } 00:15:33.478 ], 00:15:33.478 "driver_specific": {} 00:15:33.478 }' 00:15:33.478 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:33.478 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:33.737 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:33.996 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:33.996 "name": "BaseBdev3", 00:15:33.996 "aliases": [ 00:15:33.996 "0c4e216b-e574-42f9-ac0f-c8c60955ecd9" 00:15:33.996 ], 00:15:33.996 "product_name": "Malloc disk", 00:15:33.996 "block_size": 512, 00:15:33.996 "num_blocks": 65536, 00:15:33.996 "uuid": "0c4e216b-e574-42f9-ac0f-c8c60955ecd9", 00:15:33.996 "assigned_rate_limits": { 00:15:33.996 "rw_ios_per_sec": 0, 00:15:33.996 "rw_mbytes_per_sec": 0, 00:15:33.996 "r_mbytes_per_sec": 0, 00:15:33.996 "w_mbytes_per_sec": 0 00:15:33.996 }, 
00:15:33.996 "claimed": true, 00:15:33.996 "claim_type": "exclusive_write", 00:15:33.996 "zoned": false, 00:15:33.996 "supported_io_types": { 00:15:33.996 "read": true, 00:15:33.996 "write": true, 00:15:33.996 "unmap": true, 00:15:33.996 "write_zeroes": true, 00:15:33.996 "flush": true, 00:15:33.996 "reset": true, 00:15:33.996 "compare": false, 00:15:33.996 "compare_and_write": false, 00:15:33.996 "abort": true, 00:15:33.996 "nvme_admin": false, 00:15:33.996 "nvme_io": false 00:15:33.996 }, 00:15:33.996 "memory_domains": [ 00:15:33.996 { 00:15:33.996 "dma_device_id": "system", 00:15:33.996 "dma_device_type": 1 00:15:33.996 }, 00:15:33.996 { 00:15:33.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.996 "dma_device_type": 2 00:15:33.996 } 00:15:33.996 ], 00:15:33.996 "driver_specific": {} 00:15:33.996 }' 00:15:33.996 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:34.253 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:34.253 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:34.253 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:34.253 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:34.253 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.253 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:34.253 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:34.253 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:34.253 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:34.511 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:34.511 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:34.511 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:34.511 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:34.511 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:34.769 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:34.769 "name": "BaseBdev4", 00:15:34.769 "aliases": [ 00:15:34.769 "23f8367e-5c0f-4f5d-b072-e0f3864eb209" 00:15:34.769 ], 00:15:34.769 "product_name": "Malloc disk", 00:15:34.769 "block_size": 512, 00:15:34.769 "num_blocks": 65536, 00:15:34.769 "uuid": "23f8367e-5c0f-4f5d-b072-e0f3864eb209", 00:15:34.769 "assigned_rate_limits": { 00:15:34.769 "rw_ios_per_sec": 0, 00:15:34.769 "rw_mbytes_per_sec": 0, 00:15:34.769 "r_mbytes_per_sec": 0, 00:15:34.769 "w_mbytes_per_sec": 0 00:15:34.769 }, 00:15:34.769 "claimed": true, 00:15:34.769 "claim_type": "exclusive_write", 00:15:34.769 "zoned": false, 00:15:34.769 "supported_io_types": { 00:15:34.769 "read": true, 00:15:34.769 "write": true, 00:15:34.769 "unmap": true, 00:15:34.769 "write_zeroes": true, 00:15:34.769 "flush": true, 00:15:34.769 "reset": true, 00:15:34.769 "compare": false, 00:15:34.769 "compare_and_write": false, 00:15:34.769 "abort": true, 00:15:34.769 "nvme_admin": false, 00:15:34.769 "nvme_io": false 
00:15:34.769 }, 00:15:34.769 "memory_domains": [ 00:15:34.769 { 00:15:34.769 "dma_device_id": "system", 00:15:34.769 "dma_device_type": 1 00:15:34.769 }, 00:15:34.769 { 00:15:34.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.769 "dma_device_type": 2 00:15:34.769 } 00:15:34.769 ], 00:15:34.769 "driver_specific": {} 00:15:34.769 }' 00:15:34.769 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:34.769 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:34.770 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:34.770 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:34.770 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:34.770 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.770 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:35.027 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:35.027 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.027 03:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:35.027 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:35.027 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:35.027 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:35.286 [2024-05-15 03:10:06.303326] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.286 [2024-05-15 03:10:06.303350] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.286 [2024-05-15 03:10:06.303394] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.286 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.545 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:35.545 "name": "Existed_Raid", 00:15:35.545 "uuid": "59f34446-e87f-49d2-a468-8ca0076f1c90", 00:15:35.545 "strip_size_kb": 64, 00:15:35.545 "state": "offline", 00:15:35.545 "raid_level": "raid0", 00:15:35.545 "superblock": false, 00:15:35.545 "num_base_bdevs": 4, 00:15:35.545 "num_base_bdevs_discovered": 3, 00:15:35.545 "num_base_bdevs_operational": 3, 00:15:35.545 "base_bdevs_list": [ 00:15:35.545 { 00:15:35.545 "name": null, 00:15:35.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.545 "is_configured": false, 00:15:35.545 "data_offset": 0, 00:15:35.545 "data_size": 65536 00:15:35.545 }, 00:15:35.545 { 00:15:35.545 "name": "BaseBdev2", 00:15:35.545 "uuid": "1d3a3b83-2779-416e-bd68-4a2294bd3ce3", 00:15:35.545 "is_configured": true, 00:15:35.545 "data_offset": 0, 00:15:35.545 "data_size": 65536 00:15:35.545 }, 00:15:35.545 { 00:15:35.545 "name": "BaseBdev3", 00:15:35.545 "uuid": "0c4e216b-e574-42f9-ac0f-c8c60955ecd9", 00:15:35.545 "is_configured": true, 00:15:35.545 "data_offset": 0, 00:15:35.545 "data_size": 65536 00:15:35.545 }, 00:15:35.545 { 00:15:35.545 "name": "BaseBdev4", 00:15:35.545 "uuid": "23f8367e-5c0f-4f5d-b072-e0f3864eb209", 00:15:35.545 "is_configured": true, 00:15:35.545 "data_offset": 0, 00:15:35.545 "data_size": 65536 00:15:35.545 } 00:15:35.545 ] 00:15:35.545 }' 00:15:35.545 03:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:35.545 03:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.111 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:36.111 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:36.111 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.111 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:36.370 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:36.370 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:36.370 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:36.628 [2024-05-15 03:10:07.700173] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:36.628 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:36.628 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:36.628 03:10:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.628 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:36.886 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:36.886 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:36.886 03:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:37.144 [2024-05-15 03:10:08.216059] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.144 03:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.144 03:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.144 03:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.144 03:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:37.402 03:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:37.402 03:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.402 03:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:37.660 [2024-05-15 03:10:08.723747] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:37.660 [2024-05-15 03:10:08.723785] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23fe670 name Existed_Raid, state offline 00:15:37.660 03:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.660 03:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.660 03:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.660 03:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:15:37.917 03:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:15:37.917 03:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:15:37.917 03:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:15:37.917 03:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:15:37.917 03:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:37.917 03:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.175 BaseBdev2 00:15:38.175 03:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:15:38.175 03:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local 
bdev_name=BaseBdev2 00:15:38.175 03:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:38.175 03:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:38.175 03:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:38.175 03:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:38.175 03:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.433 03:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.690 [ 00:15:38.690 { 00:15:38.690 "name": "BaseBdev2", 00:15:38.690 "aliases": [ 00:15:38.690 "909969e8-443e-4bd5-821e-6728706316bc" 00:15:38.690 ], 00:15:38.690 "product_name": "Malloc disk", 00:15:38.691 "block_size": 512, 00:15:38.691 "num_blocks": 65536, 00:15:38.691 "uuid": "909969e8-443e-4bd5-821e-6728706316bc", 00:15:38.691 "assigned_rate_limits": { 00:15:38.691 "rw_ios_per_sec": 0, 00:15:38.691 "rw_mbytes_per_sec": 0, 00:15:38.691 "r_mbytes_per_sec": 0, 00:15:38.691 "w_mbytes_per_sec": 0 00:15:38.691 }, 00:15:38.691 "claimed": false, 00:15:38.691 "zoned": false, 00:15:38.691 "supported_io_types": { 00:15:38.691 "read": true, 00:15:38.691 "write": true, 00:15:38.691 "unmap": true, 00:15:38.691 "write_zeroes": true, 00:15:38.691 "flush": true, 00:15:38.691 "reset": true, 00:15:38.691 "compare": false, 00:15:38.691 "compare_and_write": false, 00:15:38.691 "abort": true, 00:15:38.691 "nvme_admin": false, 00:15:38.691 "nvme_io": false 00:15:38.691 }, 00:15:38.691 "memory_domains": [ 00:15:38.691 { 00:15:38.691 "dma_device_id": "system", 00:15:38.691 "dma_device_type": 1 00:15:38.691 }, 00:15:38.691 { 00:15:38.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.691 "dma_device_type": 2 00:15:38.691 } 00:15:38.691 ], 00:15:38.691 "driver_specific": {} 00:15:38.691 } 00:15:38.691 ] 00:15:38.691 03:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:38.691 03:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:38.691 03:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:38.691 03:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:38.948 BaseBdev3 00:15:38.948 03:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:15:38.948 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:38.948 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:38.948 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:38.948 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:38.948 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:38.948 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.206 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:39.464 [ 00:15:39.464 { 00:15:39.464 "name": "BaseBdev3", 00:15:39.464 "aliases": [ 00:15:39.464 "ce720030-4e0c-441a-a472-080dc9ce8718" 00:15:39.464 ], 00:15:39.464 "product_name": "Malloc disk", 00:15:39.464 "block_size": 512, 00:15:39.464 "num_blocks": 65536, 00:15:39.464 "uuid": "ce720030-4e0c-441a-a472-080dc9ce8718", 00:15:39.464 "assigned_rate_limits": { 00:15:39.464 "rw_ios_per_sec": 0, 00:15:39.464 "rw_mbytes_per_sec": 0, 00:15:39.464 "r_mbytes_per_sec": 0, 00:15:39.464 "w_mbytes_per_sec": 0 00:15:39.464 }, 00:15:39.464 "claimed": false, 00:15:39.464 "zoned": false, 00:15:39.464 "supported_io_types": { 00:15:39.464 "read": true, 00:15:39.464 "write": true, 00:15:39.464 "unmap": true, 00:15:39.464 "write_zeroes": true, 00:15:39.464 "flush": true, 00:15:39.464 "reset": true, 00:15:39.464 "compare": false, 00:15:39.464 "compare_and_write": false, 00:15:39.464 "abort": true, 00:15:39.464 "nvme_admin": false, 00:15:39.464 "nvme_io": false 00:15:39.464 }, 00:15:39.464 "memory_domains": [ 00:15:39.464 { 00:15:39.464 "dma_device_id": "system", 00:15:39.464 "dma_device_type": 1 00:15:39.464 }, 00:15:39.464 { 00:15:39.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.464 "dma_device_type": 2 00:15:39.464 } 00:15:39.464 ], 00:15:39.464 "driver_specific": {} 00:15:39.464 } 00:15:39.464 ] 00:15:39.464 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:39.464 03:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:39.464 03:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:39.464 03:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:39.722 BaseBdev4 00:15:39.722 03:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:15:39.722 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:15:39.722 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:39.722 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:39.722 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:39.722 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:39.722 03:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.982 03:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:40.240 [ 00:15:40.240 { 00:15:40.240 "name": "BaseBdev4", 00:15:40.240 "aliases": [ 00:15:40.240 "1689d804-3517-46d3-ace1-34d0b1a347a3" 00:15:40.240 ], 00:15:40.240 "product_name": "Malloc disk", 00:15:40.240 "block_size": 512, 00:15:40.240 
"num_blocks": 65536, 00:15:40.240 "uuid": "1689d804-3517-46d3-ace1-34d0b1a347a3", 00:15:40.240 "assigned_rate_limits": { 00:15:40.240 "rw_ios_per_sec": 0, 00:15:40.240 "rw_mbytes_per_sec": 0, 00:15:40.240 "r_mbytes_per_sec": 0, 00:15:40.240 "w_mbytes_per_sec": 0 00:15:40.240 }, 00:15:40.240 "claimed": false, 00:15:40.240 "zoned": false, 00:15:40.240 "supported_io_types": { 00:15:40.240 "read": true, 00:15:40.240 "write": true, 00:15:40.240 "unmap": true, 00:15:40.240 "write_zeroes": true, 00:15:40.240 "flush": true, 00:15:40.240 "reset": true, 00:15:40.240 "compare": false, 00:15:40.240 "compare_and_write": false, 00:15:40.240 "abort": true, 00:15:40.240 "nvme_admin": false, 00:15:40.240 "nvme_io": false 00:15:40.240 }, 00:15:40.240 "memory_domains": [ 00:15:40.240 { 00:15:40.240 "dma_device_id": "system", 00:15:40.240 "dma_device_type": 1 00:15:40.240 }, 00:15:40.240 { 00:15:40.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.240 "dma_device_type": 2 00:15:40.240 } 00:15:40.240 ], 00:15:40.240 "driver_specific": {} 00:15:40.240 } 00:15:40.240 ] 00:15:40.240 03:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:40.240 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:40.240 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:40.240 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:40.498 [2024-05-15 03:10:11.530685] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:40.498 [2024-05-15 03:10:11.530723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:40.498 [2024-05-15 03:10:11.530741] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.498 [2024-05-15 03:10:11.532136] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.498 [2024-05-15 03:10:11.532177] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:40.498 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:40.498 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:40.498 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:40.498 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:40.498 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:40.498 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:40.498 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:40.498 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:40.498 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:40.498 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:40.498 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.498 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.756 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.756 "name": "Existed_Raid", 00:15:40.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.756 "strip_size_kb": 64, 00:15:40.756 "state": "configuring", 00:15:40.756 "raid_level": "raid0", 00:15:40.756 "superblock": false, 00:15:40.756 "num_base_bdevs": 4, 00:15:40.756 "num_base_bdevs_discovered": 3, 00:15:40.756 "num_base_bdevs_operational": 4, 00:15:40.756 "base_bdevs_list": [ 00:15:40.756 { 00:15:40.756 "name": "BaseBdev1", 00:15:40.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.756 "is_configured": false, 00:15:40.756 "data_offset": 0, 00:15:40.756 "data_size": 0 00:15:40.756 }, 00:15:40.756 { 00:15:40.756 "name": "BaseBdev2", 00:15:40.756 "uuid": "909969e8-443e-4bd5-821e-6728706316bc", 00:15:40.756 "is_configured": true, 00:15:40.756 "data_offset": 0, 00:15:40.756 "data_size": 65536 00:15:40.756 }, 00:15:40.756 { 00:15:40.756 "name": "BaseBdev3", 00:15:40.756 "uuid": "ce720030-4e0c-441a-a472-080dc9ce8718", 00:15:40.756 "is_configured": true, 00:15:40.756 "data_offset": 0, 00:15:40.756 "data_size": 65536 00:15:40.756 }, 00:15:40.756 { 00:15:40.756 "name": "BaseBdev4", 00:15:40.756 "uuid": "1689d804-3517-46d3-ace1-34d0b1a347a3", 00:15:40.756 "is_configured": true, 00:15:40.756 "data_offset": 0, 00:15:40.756 "data_size": 65536 00:15:40.756 } 00:15:40.756 ] 00:15:40.756 }' 00:15:40.756 03:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.756 03:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.321 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:41.578 [2024-05-15 03:10:12.665698] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.578 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:41.578 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:41.578 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:41.578 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:41.578 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:41.578 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:41.578 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.578 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.578 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.578 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.578 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:41.578 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.836 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.836 "name": "Existed_Raid", 00:15:41.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.836 "strip_size_kb": 64, 00:15:41.836 "state": "configuring", 00:15:41.836 "raid_level": "raid0", 00:15:41.836 "superblock": false, 00:15:41.836 "num_base_bdevs": 4, 00:15:41.836 "num_base_bdevs_discovered": 2, 00:15:41.836 "num_base_bdevs_operational": 4, 00:15:41.836 "base_bdevs_list": [ 00:15:41.836 { 00:15:41.836 "name": "BaseBdev1", 00:15:41.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.836 "is_configured": false, 00:15:41.836 "data_offset": 0, 00:15:41.836 "data_size": 0 00:15:41.836 }, 00:15:41.836 { 00:15:41.836 "name": null, 00:15:41.836 "uuid": "909969e8-443e-4bd5-821e-6728706316bc", 00:15:41.836 "is_configured": false, 00:15:41.836 "data_offset": 0, 00:15:41.836 "data_size": 65536 00:15:41.836 }, 00:15:41.836 { 00:15:41.836 "name": "BaseBdev3", 00:15:41.836 "uuid": "ce720030-4e0c-441a-a472-080dc9ce8718", 00:15:41.836 "is_configured": true, 00:15:41.836 "data_offset": 0, 00:15:41.836 "data_size": 65536 00:15:41.836 }, 00:15:41.836 { 00:15:41.836 "name": "BaseBdev4", 00:15:41.836 "uuid": "1689d804-3517-46d3-ace1-34d0b1a347a3", 00:15:41.836 "is_configured": true, 00:15:41.836 "data_offset": 0, 00:15:41.836 "data_size": 65536 00:15:41.836 } 00:15:41.836 ] 00:15:41.836 }' 00:15:41.836 03:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.836 03:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.406 03:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.406 03:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:42.663 03:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:15:42.663 03:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:42.922 [2024-05-15 03:10:14.052563] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.922 BaseBdev1 00:15:42.922 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:15:42.922 03:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:42.922 03:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:42.922 03:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:42.922 03:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:42.922 03:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:42.922 03:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:43.180 03:10:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:43.437 [ 00:15:43.437 { 00:15:43.437 "name": "BaseBdev1", 00:15:43.437 "aliases": [ 00:15:43.437 "1437c03b-e061-4fb0-b8d0-87db05015dec" 00:15:43.437 ], 00:15:43.437 "product_name": "Malloc disk", 00:15:43.437 "block_size": 512, 00:15:43.437 "num_blocks": 65536, 00:15:43.437 "uuid": "1437c03b-e061-4fb0-b8d0-87db05015dec", 00:15:43.437 "assigned_rate_limits": { 00:15:43.437 "rw_ios_per_sec": 0, 00:15:43.437 "rw_mbytes_per_sec": 0, 00:15:43.437 "r_mbytes_per_sec": 0, 00:15:43.437 "w_mbytes_per_sec": 0 00:15:43.437 }, 00:15:43.437 "claimed": true, 00:15:43.437 "claim_type": "exclusive_write", 00:15:43.437 "zoned": false, 00:15:43.437 "supported_io_types": { 00:15:43.437 "read": true, 00:15:43.437 "write": true, 00:15:43.437 "unmap": true, 00:15:43.437 "write_zeroes": true, 00:15:43.437 "flush": true, 00:15:43.437 "reset": true, 00:15:43.437 "compare": false, 00:15:43.437 "compare_and_write": false, 00:15:43.437 "abort": true, 00:15:43.437 "nvme_admin": false, 00:15:43.437 "nvme_io": false 00:15:43.437 }, 00:15:43.437 "memory_domains": [ 00:15:43.437 { 00:15:43.437 "dma_device_id": "system", 00:15:43.438 "dma_device_type": 1 00:15:43.438 }, 00:15:43.438 { 00:15:43.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.438 "dma_device_type": 2 00:15:43.438 } 00:15:43.438 ], 00:15:43.438 "driver_specific": {} 00:15:43.438 } 00:15:43.438 ] 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.438 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.695 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.695 "name": "Existed_Raid", 00:15:43.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.695 "strip_size_kb": 64, 00:15:43.695 "state": "configuring", 00:15:43.695 "raid_level": "raid0", 00:15:43.695 "superblock": false, 00:15:43.695 "num_base_bdevs": 4, 00:15:43.695 "num_base_bdevs_discovered": 3, 00:15:43.695 
"num_base_bdevs_operational": 4, 00:15:43.695 "base_bdevs_list": [ 00:15:43.695 { 00:15:43.695 "name": "BaseBdev1", 00:15:43.695 "uuid": "1437c03b-e061-4fb0-b8d0-87db05015dec", 00:15:43.695 "is_configured": true, 00:15:43.695 "data_offset": 0, 00:15:43.695 "data_size": 65536 00:15:43.695 }, 00:15:43.695 { 00:15:43.695 "name": null, 00:15:43.695 "uuid": "909969e8-443e-4bd5-821e-6728706316bc", 00:15:43.695 "is_configured": false, 00:15:43.695 "data_offset": 0, 00:15:43.695 "data_size": 65536 00:15:43.695 }, 00:15:43.695 { 00:15:43.695 "name": "BaseBdev3", 00:15:43.695 "uuid": "ce720030-4e0c-441a-a472-080dc9ce8718", 00:15:43.695 "is_configured": true, 00:15:43.695 "data_offset": 0, 00:15:43.695 "data_size": 65536 00:15:43.695 }, 00:15:43.695 { 00:15:43.695 "name": "BaseBdev4", 00:15:43.695 "uuid": "1689d804-3517-46d3-ace1-34d0b1a347a3", 00:15:43.695 "is_configured": true, 00:15:43.695 "data_offset": 0, 00:15:43.695 "data_size": 65536 00:15:43.695 } 00:15:43.695 ] 00:15:43.695 }' 00:15:43.695 03:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.695 03:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.627 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.627 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:44.627 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:44.627 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:44.884 [2024-05-15 03:10:15.929633] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:44.884 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:44.884 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:44.884 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:44.884 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:44.884 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:44.884 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:44.884 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:44.884 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:44.884 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:44.884 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:44.884 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.884 03:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.141 03:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:15:45.141 "name": "Existed_Raid", 00:15:45.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.141 "strip_size_kb": 64, 00:15:45.141 "state": "configuring", 00:15:45.141 "raid_level": "raid0", 00:15:45.141 "superblock": false, 00:15:45.141 "num_base_bdevs": 4, 00:15:45.141 "num_base_bdevs_discovered": 2, 00:15:45.141 "num_base_bdevs_operational": 4, 00:15:45.141 "base_bdevs_list": [ 00:15:45.141 { 00:15:45.141 "name": "BaseBdev1", 00:15:45.141 "uuid": "1437c03b-e061-4fb0-b8d0-87db05015dec", 00:15:45.141 "is_configured": true, 00:15:45.141 "data_offset": 0, 00:15:45.141 "data_size": 65536 00:15:45.141 }, 00:15:45.141 { 00:15:45.141 "name": null, 00:15:45.141 "uuid": "909969e8-443e-4bd5-821e-6728706316bc", 00:15:45.141 "is_configured": false, 00:15:45.141 "data_offset": 0, 00:15:45.141 "data_size": 65536 00:15:45.141 }, 00:15:45.141 { 00:15:45.141 "name": null, 00:15:45.141 "uuid": "ce720030-4e0c-441a-a472-080dc9ce8718", 00:15:45.141 "is_configured": false, 00:15:45.141 "data_offset": 0, 00:15:45.141 "data_size": 65536 00:15:45.141 }, 00:15:45.141 { 00:15:45.141 "name": "BaseBdev4", 00:15:45.141 "uuid": "1689d804-3517-46d3-ace1-34d0b1a347a3", 00:15:45.141 "is_configured": true, 00:15:45.141 "data_offset": 0, 00:15:45.141 "data_size": 65536 00:15:45.141 } 00:15:45.141 ] 00:15:45.141 }' 00:15:45.141 03:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.141 03:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.739 03:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:45.739 03:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.997 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:15:45.997 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:46.254 [2024-05-15 03:10:17.301327] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.254 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:46.254 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:46.254 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:46.254 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:46.254 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:46.254 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:46.254 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.254 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.254 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.254 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.254 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.254 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.512 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.512 "name": "Existed_Raid", 00:15:46.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.512 "strip_size_kb": 64, 00:15:46.512 "state": "configuring", 00:15:46.512 "raid_level": "raid0", 00:15:46.512 "superblock": false, 00:15:46.512 "num_base_bdevs": 4, 00:15:46.512 "num_base_bdevs_discovered": 3, 00:15:46.512 "num_base_bdevs_operational": 4, 00:15:46.512 "base_bdevs_list": [ 00:15:46.512 { 00:15:46.512 "name": "BaseBdev1", 00:15:46.512 "uuid": "1437c03b-e061-4fb0-b8d0-87db05015dec", 00:15:46.512 "is_configured": true, 00:15:46.512 "data_offset": 0, 00:15:46.512 "data_size": 65536 00:15:46.512 }, 00:15:46.512 { 00:15:46.512 "name": null, 00:15:46.512 "uuid": "909969e8-443e-4bd5-821e-6728706316bc", 00:15:46.512 "is_configured": false, 00:15:46.512 "data_offset": 0, 00:15:46.512 "data_size": 65536 00:15:46.512 }, 00:15:46.512 { 00:15:46.512 "name": "BaseBdev3", 00:15:46.512 "uuid": "ce720030-4e0c-441a-a472-080dc9ce8718", 00:15:46.512 "is_configured": true, 00:15:46.512 "data_offset": 0, 00:15:46.512 "data_size": 65536 00:15:46.512 }, 00:15:46.512 { 00:15:46.512 "name": "BaseBdev4", 00:15:46.512 "uuid": "1689d804-3517-46d3-ace1-34d0b1a347a3", 00:15:46.512 "is_configured": true, 00:15:46.512 "data_offset": 0, 00:15:46.512 "data_size": 65536 00:15:46.512 } 00:15:46.512 ] 00:15:46.512 }' 00:15:46.512 03:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.512 03:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.077 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.077 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:47.335 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:15:47.335 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:47.593 [2024-05-15 03:10:18.677028] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:47.593 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:47.593 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:47.593 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:47.593 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:47.593 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:47.593 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:47.593 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:47.593 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:15:47.594 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:47.594 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:47.594 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.594 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.851 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.851 "name": "Existed_Raid", 00:15:47.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.851 "strip_size_kb": 64, 00:15:47.851 "state": "configuring", 00:15:47.851 "raid_level": "raid0", 00:15:47.851 "superblock": false, 00:15:47.851 "num_base_bdevs": 4, 00:15:47.851 "num_base_bdevs_discovered": 2, 00:15:47.851 "num_base_bdevs_operational": 4, 00:15:47.851 "base_bdevs_list": [ 00:15:47.851 { 00:15:47.851 "name": null, 00:15:47.851 "uuid": "1437c03b-e061-4fb0-b8d0-87db05015dec", 00:15:47.851 "is_configured": false, 00:15:47.851 "data_offset": 0, 00:15:47.851 "data_size": 65536 00:15:47.851 }, 00:15:47.851 { 00:15:47.851 "name": null, 00:15:47.851 "uuid": "909969e8-443e-4bd5-821e-6728706316bc", 00:15:47.851 "is_configured": false, 00:15:47.851 "data_offset": 0, 00:15:47.851 "data_size": 65536 00:15:47.851 }, 00:15:47.851 { 00:15:47.851 "name": "BaseBdev3", 00:15:47.851 "uuid": "ce720030-4e0c-441a-a472-080dc9ce8718", 00:15:47.851 "is_configured": true, 00:15:47.851 "data_offset": 0, 00:15:47.851 "data_size": 65536 00:15:47.851 }, 00:15:47.851 { 00:15:47.851 "name": "BaseBdev4", 00:15:47.851 "uuid": "1689d804-3517-46d3-ace1-34d0b1a347a3", 00:15:47.851 "is_configured": true, 00:15:47.851 "data_offset": 0, 00:15:47.851 "data_size": 65536 00:15:47.851 } 00:15:47.851 ] 00:15:47.851 }' 00:15:47.851 03:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.851 03:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.785 03:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.785 03:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:48.785 03:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:15:48.785 03:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:49.043 [2024-05-15 03:10:20.083256] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.043 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:49.043 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:49.043 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:49.043 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:49.043 03:10:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:49.043 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:49.043 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.043 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.043 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.043 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.043 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.043 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.300 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:49.300 "name": "Existed_Raid", 00:15:49.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.300 "strip_size_kb": 64, 00:15:49.300 "state": "configuring", 00:15:49.300 "raid_level": "raid0", 00:15:49.300 "superblock": false, 00:15:49.300 "num_base_bdevs": 4, 00:15:49.300 "num_base_bdevs_discovered": 3, 00:15:49.300 "num_base_bdevs_operational": 4, 00:15:49.300 "base_bdevs_list": [ 00:15:49.300 { 00:15:49.300 "name": null, 00:15:49.300 "uuid": "1437c03b-e061-4fb0-b8d0-87db05015dec", 00:15:49.300 "is_configured": false, 00:15:49.300 "data_offset": 0, 00:15:49.300 "data_size": 65536 00:15:49.300 }, 00:15:49.300 { 00:15:49.300 "name": "BaseBdev2", 00:15:49.300 "uuid": "909969e8-443e-4bd5-821e-6728706316bc", 00:15:49.300 "is_configured": true, 00:15:49.300 "data_offset": 0, 00:15:49.300 "data_size": 65536 00:15:49.300 }, 00:15:49.300 { 00:15:49.300 "name": "BaseBdev3", 00:15:49.300 "uuid": "ce720030-4e0c-441a-a472-080dc9ce8718", 00:15:49.300 "is_configured": true, 00:15:49.300 "data_offset": 0, 00:15:49.300 "data_size": 65536 00:15:49.300 }, 00:15:49.300 { 00:15:49.300 "name": "BaseBdev4", 00:15:49.300 "uuid": "1689d804-3517-46d3-ace1-34d0b1a347a3", 00:15:49.300 "is_configured": true, 00:15:49.300 "data_offset": 0, 00:15:49.300 "data_size": 65536 00:15:49.300 } 00:15:49.300 ] 00:15:49.300 }' 00:15:49.301 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:49.301 03:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.866 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.866 03:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:50.123 03:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:15:50.123 03:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.123 03:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:50.381 03:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 
1437c03b-e061-4fb0-b8d0-87db05015dec 00:15:50.640 [2024-05-15 03:10:21.738955] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:50.640 [2024-05-15 03:10:21.738988] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x23ff3e0 00:15:50.640 [2024-05-15 03:10:21.738995] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:50.640 [2024-05-15 03:10:21.739192] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23eb610 00:15:50.640 [2024-05-15 03:10:21.739318] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23ff3e0 00:15:50.640 [2024-05-15 03:10:21.739327] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x23ff3e0 00:15:50.640 [2024-05-15 03:10:21.739483] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.640 NewBaseBdev 00:15:50.640 03:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:15:50.640 03:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:15:50.640 03:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:50.640 03:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:50.640 03:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:50.640 03:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:50.640 03:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:50.897 03:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:51.155 [ 00:15:51.155 { 00:15:51.155 "name": "NewBaseBdev", 00:15:51.155 "aliases": [ 00:15:51.155 "1437c03b-e061-4fb0-b8d0-87db05015dec" 00:15:51.155 ], 00:15:51.155 "product_name": "Malloc disk", 00:15:51.155 "block_size": 512, 00:15:51.155 "num_blocks": 65536, 00:15:51.155 "uuid": "1437c03b-e061-4fb0-b8d0-87db05015dec", 00:15:51.155 "assigned_rate_limits": { 00:15:51.155 "rw_ios_per_sec": 0, 00:15:51.155 "rw_mbytes_per_sec": 0, 00:15:51.155 "r_mbytes_per_sec": 0, 00:15:51.155 "w_mbytes_per_sec": 0 00:15:51.155 }, 00:15:51.155 "claimed": true, 00:15:51.155 "claim_type": "exclusive_write", 00:15:51.155 "zoned": false, 00:15:51.155 "supported_io_types": { 00:15:51.155 "read": true, 00:15:51.155 "write": true, 00:15:51.155 "unmap": true, 00:15:51.155 "write_zeroes": true, 00:15:51.155 "flush": true, 00:15:51.155 "reset": true, 00:15:51.155 "compare": false, 00:15:51.155 "compare_and_write": false, 00:15:51.155 "abort": true, 00:15:51.155 "nvme_admin": false, 00:15:51.155 "nvme_io": false 00:15:51.155 }, 00:15:51.155 "memory_domains": [ 00:15:51.155 { 00:15:51.155 "dma_device_id": "system", 00:15:51.155 "dma_device_type": 1 00:15:51.155 }, 00:15:51.155 { 00:15:51.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.155 "dma_device_type": 2 00:15:51.155 } 00:15:51.155 ], 00:15:51.155 "driver_specific": {} 00:15:51.155 } 00:15:51.155 ] 00:15:51.155 03:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:51.155 03:10:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:51.155 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:51.155 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:51.155 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:51.155 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:51.155 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:51.155 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.155 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.155 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.155 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.155 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.156 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.413 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.413 "name": "Existed_Raid", 00:15:51.413 "uuid": "9e9e025b-7a5a-462d-9923-a2dd6124da70", 00:15:51.413 "strip_size_kb": 64, 00:15:51.413 "state": "online", 00:15:51.413 "raid_level": "raid0", 00:15:51.413 "superblock": false, 00:15:51.413 "num_base_bdevs": 4, 00:15:51.413 "num_base_bdevs_discovered": 4, 00:15:51.413 "num_base_bdevs_operational": 4, 00:15:51.413 "base_bdevs_list": [ 00:15:51.413 { 00:15:51.413 "name": "NewBaseBdev", 00:15:51.413 "uuid": "1437c03b-e061-4fb0-b8d0-87db05015dec", 00:15:51.413 "is_configured": true, 00:15:51.413 "data_offset": 0, 00:15:51.413 "data_size": 65536 00:15:51.413 }, 00:15:51.413 { 00:15:51.413 "name": "BaseBdev2", 00:15:51.413 "uuid": "909969e8-443e-4bd5-821e-6728706316bc", 00:15:51.413 "is_configured": true, 00:15:51.413 "data_offset": 0, 00:15:51.413 "data_size": 65536 00:15:51.413 }, 00:15:51.413 { 00:15:51.413 "name": "BaseBdev3", 00:15:51.413 "uuid": "ce720030-4e0c-441a-a472-080dc9ce8718", 00:15:51.413 "is_configured": true, 00:15:51.413 "data_offset": 0, 00:15:51.413 "data_size": 65536 00:15:51.413 }, 00:15:51.413 { 00:15:51.413 "name": "BaseBdev4", 00:15:51.413 "uuid": "1689d804-3517-46d3-ace1-34d0b1a347a3", 00:15:51.413 "is_configured": true, 00:15:51.413 "data_offset": 0, 00:15:51.413 "data_size": 65536 00:15:51.413 } 00:15:51.413 ] 00:15:51.413 }' 00:15:51.413 03:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.413 03:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_info 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:52.347 [2024-05-15 03:10:23.375684] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:52.347 "name": "Existed_Raid", 00:15:52.347 "aliases": [ 00:15:52.347 "9e9e025b-7a5a-462d-9923-a2dd6124da70" 00:15:52.347 ], 00:15:52.347 "product_name": "Raid Volume", 00:15:52.347 "block_size": 512, 00:15:52.347 "num_blocks": 262144, 00:15:52.347 "uuid": "9e9e025b-7a5a-462d-9923-a2dd6124da70", 00:15:52.347 "assigned_rate_limits": { 00:15:52.347 "rw_ios_per_sec": 0, 00:15:52.347 "rw_mbytes_per_sec": 0, 00:15:52.347 "r_mbytes_per_sec": 0, 00:15:52.347 "w_mbytes_per_sec": 0 00:15:52.347 }, 00:15:52.347 "claimed": false, 00:15:52.347 "zoned": false, 00:15:52.347 "supported_io_types": { 00:15:52.347 "read": true, 00:15:52.347 "write": true, 00:15:52.347 "unmap": true, 00:15:52.347 "write_zeroes": true, 00:15:52.347 "flush": true, 00:15:52.347 "reset": true, 00:15:52.347 "compare": false, 00:15:52.347 "compare_and_write": false, 00:15:52.347 "abort": false, 00:15:52.347 "nvme_admin": false, 00:15:52.347 "nvme_io": false 00:15:52.347 }, 00:15:52.347 "memory_domains": [ 00:15:52.347 { 00:15:52.347 "dma_device_id": "system", 00:15:52.347 "dma_device_type": 1 00:15:52.347 }, 00:15:52.347 { 00:15:52.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.347 "dma_device_type": 2 00:15:52.347 }, 00:15:52.347 { 00:15:52.347 "dma_device_id": "system", 00:15:52.347 "dma_device_type": 1 00:15:52.347 }, 00:15:52.347 { 00:15:52.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.347 "dma_device_type": 2 00:15:52.347 }, 00:15:52.347 { 00:15:52.347 "dma_device_id": "system", 00:15:52.347 "dma_device_type": 1 00:15:52.347 }, 00:15:52.347 { 00:15:52.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.347 "dma_device_type": 2 00:15:52.347 }, 00:15:52.347 { 00:15:52.347 "dma_device_id": "system", 00:15:52.347 "dma_device_type": 1 00:15:52.347 }, 00:15:52.347 { 00:15:52.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.347 "dma_device_type": 2 00:15:52.347 } 00:15:52.347 ], 00:15:52.347 "driver_specific": { 00:15:52.347 "raid": { 00:15:52.347 "uuid": "9e9e025b-7a5a-462d-9923-a2dd6124da70", 00:15:52.347 "strip_size_kb": 64, 00:15:52.347 "state": "online", 00:15:52.347 "raid_level": "raid0", 00:15:52.347 "superblock": false, 00:15:52.347 "num_base_bdevs": 4, 00:15:52.347 "num_base_bdevs_discovered": 4, 00:15:52.347 "num_base_bdevs_operational": 4, 00:15:52.347 "base_bdevs_list": [ 00:15:52.347 { 00:15:52.347 "name": "NewBaseBdev", 00:15:52.347 "uuid": "1437c03b-e061-4fb0-b8d0-87db05015dec", 00:15:52.347 "is_configured": true, 00:15:52.347 "data_offset": 0, 00:15:52.347 "data_size": 65536 00:15:52.347 }, 00:15:52.347 { 00:15:52.347 "name": "BaseBdev2", 00:15:52.347 "uuid": "909969e8-443e-4bd5-821e-6728706316bc", 00:15:52.347 "is_configured": true, 00:15:52.347 "data_offset": 0, 00:15:52.347 "data_size": 65536 00:15:52.347 }, 00:15:52.347 { 00:15:52.347 "name": 
"BaseBdev3", 00:15:52.347 "uuid": "ce720030-4e0c-441a-a472-080dc9ce8718", 00:15:52.347 "is_configured": true, 00:15:52.347 "data_offset": 0, 00:15:52.347 "data_size": 65536 00:15:52.347 }, 00:15:52.347 { 00:15:52.347 "name": "BaseBdev4", 00:15:52.347 "uuid": "1689d804-3517-46d3-ace1-34d0b1a347a3", 00:15:52.347 "is_configured": true, 00:15:52.347 "data_offset": 0, 00:15:52.347 "data_size": 65536 00:15:52.347 } 00:15:52.347 ] 00:15:52.347 } 00:15:52.347 } 00:15:52.347 }' 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:15:52.347 BaseBdev2 00:15:52.347 BaseBdev3 00:15:52.347 BaseBdev4' 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:52.347 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:52.605 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:52.605 "name": "NewBaseBdev", 00:15:52.605 "aliases": [ 00:15:52.605 "1437c03b-e061-4fb0-b8d0-87db05015dec" 00:15:52.605 ], 00:15:52.605 "product_name": "Malloc disk", 00:15:52.605 "block_size": 512, 00:15:52.605 "num_blocks": 65536, 00:15:52.605 "uuid": "1437c03b-e061-4fb0-b8d0-87db05015dec", 00:15:52.605 "assigned_rate_limits": { 00:15:52.605 "rw_ios_per_sec": 0, 00:15:52.605 "rw_mbytes_per_sec": 0, 00:15:52.605 "r_mbytes_per_sec": 0, 00:15:52.605 "w_mbytes_per_sec": 0 00:15:52.605 }, 00:15:52.605 "claimed": true, 00:15:52.605 "claim_type": "exclusive_write", 00:15:52.605 "zoned": false, 00:15:52.605 "supported_io_types": { 00:15:52.605 "read": true, 00:15:52.605 "write": true, 00:15:52.605 "unmap": true, 00:15:52.605 "write_zeroes": true, 00:15:52.605 "flush": true, 00:15:52.605 "reset": true, 00:15:52.605 "compare": false, 00:15:52.605 "compare_and_write": false, 00:15:52.605 "abort": true, 00:15:52.605 "nvme_admin": false, 00:15:52.605 "nvme_io": false 00:15:52.605 }, 00:15:52.605 "memory_domains": [ 00:15:52.605 { 00:15:52.605 "dma_device_id": "system", 00:15:52.605 "dma_device_type": 1 00:15:52.605 }, 00:15:52.605 { 00:15:52.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.605 "dma_device_type": 2 00:15:52.605 } 00:15:52.605 ], 00:15:52.605 "driver_specific": {} 00:15:52.605 }' 00:15:52.605 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:52.605 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:52.863 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:52.863 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:52.863 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:52.863 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:52.863 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:52.863 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:52.863 03:10:23 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:52.863 03:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:52.863 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:53.121 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:53.121 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:53.121 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:53.121 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:53.378 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:53.378 "name": "BaseBdev2", 00:15:53.378 "aliases": [ 00:15:53.378 "909969e8-443e-4bd5-821e-6728706316bc" 00:15:53.378 ], 00:15:53.378 "product_name": "Malloc disk", 00:15:53.378 "block_size": 512, 00:15:53.378 "num_blocks": 65536, 00:15:53.378 "uuid": "909969e8-443e-4bd5-821e-6728706316bc", 00:15:53.378 "assigned_rate_limits": { 00:15:53.378 "rw_ios_per_sec": 0, 00:15:53.378 "rw_mbytes_per_sec": 0, 00:15:53.378 "r_mbytes_per_sec": 0, 00:15:53.378 "w_mbytes_per_sec": 0 00:15:53.378 }, 00:15:53.378 "claimed": true, 00:15:53.378 "claim_type": "exclusive_write", 00:15:53.378 "zoned": false, 00:15:53.378 "supported_io_types": { 00:15:53.378 "read": true, 00:15:53.378 "write": true, 00:15:53.378 "unmap": true, 00:15:53.378 "write_zeroes": true, 00:15:53.378 "flush": true, 00:15:53.378 "reset": true, 00:15:53.378 "compare": false, 00:15:53.378 "compare_and_write": false, 00:15:53.378 "abort": true, 00:15:53.378 "nvme_admin": false, 00:15:53.378 "nvme_io": false 00:15:53.378 }, 00:15:53.378 "memory_domains": [ 00:15:53.378 { 00:15:53.378 "dma_device_id": "system", 00:15:53.378 "dma_device_type": 1 00:15:53.378 }, 00:15:53.378 { 00:15:53.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.378 "dma_device_type": 2 00:15:53.378 } 00:15:53.378 ], 00:15:53.378 "driver_specific": {} 00:15:53.378 }' 00:15:53.378 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:53.378 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:53.378 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:53.378 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:53.378 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:53.378 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:53.378 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:53.636 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:53.636 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:53.636 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:53.636 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:53.636 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:53.636 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 
00:15:53.636 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:53.636 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:53.893 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:53.893 "name": "BaseBdev3", 00:15:53.893 "aliases": [ 00:15:53.893 "ce720030-4e0c-441a-a472-080dc9ce8718" 00:15:53.893 ], 00:15:53.893 "product_name": "Malloc disk", 00:15:53.893 "block_size": 512, 00:15:53.893 "num_blocks": 65536, 00:15:53.893 "uuid": "ce720030-4e0c-441a-a472-080dc9ce8718", 00:15:53.893 "assigned_rate_limits": { 00:15:53.893 "rw_ios_per_sec": 0, 00:15:53.893 "rw_mbytes_per_sec": 0, 00:15:53.893 "r_mbytes_per_sec": 0, 00:15:53.893 "w_mbytes_per_sec": 0 00:15:53.893 }, 00:15:53.893 "claimed": true, 00:15:53.893 "claim_type": "exclusive_write", 00:15:53.893 "zoned": false, 00:15:53.893 "supported_io_types": { 00:15:53.893 "read": true, 00:15:53.893 "write": true, 00:15:53.893 "unmap": true, 00:15:53.893 "write_zeroes": true, 00:15:53.893 "flush": true, 00:15:53.893 "reset": true, 00:15:53.893 "compare": false, 00:15:53.893 "compare_and_write": false, 00:15:53.893 "abort": true, 00:15:53.893 "nvme_admin": false, 00:15:53.893 "nvme_io": false 00:15:53.893 }, 00:15:53.893 "memory_domains": [ 00:15:53.893 { 00:15:53.893 "dma_device_id": "system", 00:15:53.893 "dma_device_type": 1 00:15:53.893 }, 00:15:53.893 { 00:15:53.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.893 "dma_device_type": 2 00:15:53.893 } 00:15:53.893 ], 00:15:53.893 "driver_specific": {} 00:15:53.893 }' 00:15:53.893 03:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:53.893 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:53.893 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:53.893 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:54.151 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:54.151 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:54.151 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:54.151 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:54.151 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:54.151 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:54.151 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:54.409 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:54.409 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:54.409 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:54.409 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:54.409 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:54.409 "name": "BaseBdev4", 00:15:54.409 "aliases": [ 00:15:54.409 
"1689d804-3517-46d3-ace1-34d0b1a347a3" 00:15:54.409 ], 00:15:54.409 "product_name": "Malloc disk", 00:15:54.409 "block_size": 512, 00:15:54.409 "num_blocks": 65536, 00:15:54.409 "uuid": "1689d804-3517-46d3-ace1-34d0b1a347a3", 00:15:54.409 "assigned_rate_limits": { 00:15:54.409 "rw_ios_per_sec": 0, 00:15:54.409 "rw_mbytes_per_sec": 0, 00:15:54.409 "r_mbytes_per_sec": 0, 00:15:54.409 "w_mbytes_per_sec": 0 00:15:54.409 }, 00:15:54.409 "claimed": true, 00:15:54.409 "claim_type": "exclusive_write", 00:15:54.409 "zoned": false, 00:15:54.409 "supported_io_types": { 00:15:54.409 "read": true, 00:15:54.409 "write": true, 00:15:54.409 "unmap": true, 00:15:54.409 "write_zeroes": true, 00:15:54.409 "flush": true, 00:15:54.409 "reset": true, 00:15:54.409 "compare": false, 00:15:54.409 "compare_and_write": false, 00:15:54.409 "abort": true, 00:15:54.409 "nvme_admin": false, 00:15:54.409 "nvme_io": false 00:15:54.409 }, 00:15:54.409 "memory_domains": [ 00:15:54.409 { 00:15:54.409 "dma_device_id": "system", 00:15:54.409 "dma_device_type": 1 00:15:54.409 }, 00:15:54.409 { 00:15:54.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.409 "dma_device_type": 2 00:15:54.409 } 00:15:54.409 ], 00:15:54.409 "driver_specific": {} 00:15:54.409 }' 00:15:54.409 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:54.666 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:54.666 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:54.666 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:54.666 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:54.666 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:54.666 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:54.667 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:54.924 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:54.924 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:54.924 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:54.924 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:54.924 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:55.181 [2024-05-15 03:10:26.162831] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.181 [2024-05-15 03:10:26.162861] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.181 [2024-05-15 03:10:26.162910] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.181 [2024-05-15 03:10:26.162970] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.181 [2024-05-15 03:10:26.162983] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23ff3e0 name Existed_Raid, state offline 00:15:55.181 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 4102788 00:15:55.181 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 
-- # '[' -z 4102788 ']' 00:15:55.181 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 4102788 00:15:55.181 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:15:55.181 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:55.181 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4102788 00:15:55.181 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:55.181 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:55.181 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4102788' 00:15:55.181 killing process with pid 4102788 00:15:55.181 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 4102788 00:15:55.181 [2024-05-15 03:10:26.226216] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.182 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 4102788 00:15:55.182 [2024-05-15 03:10:26.258972] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:15:55.439 00:15:55.439 real 0m33.294s 00:15:55.439 user 1m2.455s 00:15:55.439 sys 0m4.634s 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.439 ************************************ 00:15:55.439 END TEST raid_state_function_test 00:15:55.439 ************************************ 00:15:55.439 03:10:26 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:15:55.439 03:10:26 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:55.439 03:10:26 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:55.439 03:10:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.439 ************************************ 00:15:55.439 START TEST raid_state_function_test_sb 00:15:55.439 ************************************ 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 true 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:55.439 03:10:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=4108902 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4108902' 00:15:55.439 Process raid pid: 4108902 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 4108902 /var/tmp/spdk-raid.sock 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 4108902 ']' 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:55.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
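
The lines above show the harness launching the bdev_svc app on a private RPC socket (-r /var/tmp/spdk-raid.sock) and waiting for it to come up before issuing any RPCs. A minimal sketch of that start-and-wait pattern, assuming the rpc.py wrapper shipped with SPDK (the real polling logic lives in the framework's waitforlisten helper, which is not shown in this log):

    #!/usr/bin/env bash
    # Sketch: start bdev_svc on its own UNIX-domain RPC socket, then poll
    # until the app answers RPCs. Paths mirror the ones printed in the log.
    sock=/var/tmp/spdk-raid.sock
    ./test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds once the app is listening on the socket.
        ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

Once the socket answers, every later step in the transcript is an ordinary rpc.py call against that same socket.
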
00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:55.439 03:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.697 [2024-05-15 03:10:26.614889] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:15:55.697 [2024-05-15 03:10:26.614940] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.697 [2024-05-15 03:10:26.712966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.697 [2024-05-15 03:10:26.806877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.954 [2024-05-15 03:10:26.868730] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.954 [2024-05-15 03:10:26.868763] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.518 03:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:56.518 03:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:15:56.518 03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:56.776 [2024-05-15 03:10:27.796135] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.776 [2024-05-15 03:10:27.796173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:56.776 [2024-05-15 03:10:27.796186] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.776 [2024-05-15 03:10:27.796195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.776 [2024-05-15 03:10:27.796201] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:56.776 [2024-05-15 03:10:27.796210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:56.776 [2024-05-15 03:10:27.796216] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:56.776 [2024-05-15 03:10:27.796224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:56.776 03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:56.776 03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:56.776 03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:56.776 03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:56.776 03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:56.776 03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:56.776 03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:56.776 03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:56.776 
03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:56.776 03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:56.776 03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.776 03:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.034 03:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.034 "name": "Existed_Raid", 00:15:57.034 "uuid": "b38f5941-514c-43c4-870b-8af82db4772b", 00:15:57.034 "strip_size_kb": 64, 00:15:57.034 "state": "configuring", 00:15:57.034 "raid_level": "raid0", 00:15:57.034 "superblock": true, 00:15:57.034 "num_base_bdevs": 4, 00:15:57.034 "num_base_bdevs_discovered": 0, 00:15:57.034 "num_base_bdevs_operational": 4, 00:15:57.034 "base_bdevs_list": [ 00:15:57.034 { 00:15:57.034 "name": "BaseBdev1", 00:15:57.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.034 "is_configured": false, 00:15:57.034 "data_offset": 0, 00:15:57.034 "data_size": 0 00:15:57.034 }, 00:15:57.034 { 00:15:57.034 "name": "BaseBdev2", 00:15:57.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.034 "is_configured": false, 00:15:57.034 "data_offset": 0, 00:15:57.034 "data_size": 0 00:15:57.034 }, 00:15:57.034 { 00:15:57.034 "name": "BaseBdev3", 00:15:57.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.034 "is_configured": false, 00:15:57.034 "data_offset": 0, 00:15:57.034 "data_size": 0 00:15:57.034 }, 00:15:57.034 { 00:15:57.034 "name": "BaseBdev4", 00:15:57.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.034 "is_configured": false, 00:15:57.034 "data_offset": 0, 00:15:57.034 "data_size": 0 00:15:57.034 } 00:15:57.034 ] 00:15:57.034 }' 00:15:57.034 03:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.034 03:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.599 03:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:57.857 [2024-05-15 03:10:28.927013] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.857 [2024-05-15 03:10:28.927044] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12cee00 name Existed_Raid, state configuring 00:15:57.857 03:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:58.116 [2024-05-15 03:10:29.175700] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.116 [2024-05-15 03:10:29.175730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.116 [2024-05-15 03:10:29.175738] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.116 [2024-05-15 03:10:29.175747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.116 [2024-05-15 03:10:29.175754] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:58.116 [2024-05-15 03:10:29.175762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:58.116 [2024-05-15 03:10:29.175769] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:58.116 [2024-05-15 03:10:29.175777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:58.116 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:58.374 [2024-05-15 03:10:29.433941] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.374 BaseBdev1 00:15:58.374 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:15:58.374 03:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:58.374 03:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:58.374 03:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:58.374 03:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:58.374 03:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:58.374 03:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:58.631 03:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:58.889 [ 00:15:58.889 { 00:15:58.889 "name": "BaseBdev1", 00:15:58.889 "aliases": [ 00:15:58.889 "548102ae-5f45-4a4e-8b60-7085bf75f142" 00:15:58.889 ], 00:15:58.889 "product_name": "Malloc disk", 00:15:58.889 "block_size": 512, 00:15:58.889 "num_blocks": 65536, 00:15:58.889 "uuid": "548102ae-5f45-4a4e-8b60-7085bf75f142", 00:15:58.889 "assigned_rate_limits": { 00:15:58.889 "rw_ios_per_sec": 0, 00:15:58.889 "rw_mbytes_per_sec": 0, 00:15:58.889 "r_mbytes_per_sec": 0, 00:15:58.889 "w_mbytes_per_sec": 0 00:15:58.889 }, 00:15:58.889 "claimed": true, 00:15:58.889 "claim_type": "exclusive_write", 00:15:58.889 "zoned": false, 00:15:58.890 "supported_io_types": { 00:15:58.890 "read": true, 00:15:58.890 "write": true, 00:15:58.890 "unmap": true, 00:15:58.890 "write_zeroes": true, 00:15:58.890 "flush": true, 00:15:58.890 "reset": true, 00:15:58.890 "compare": false, 00:15:58.890 "compare_and_write": false, 00:15:58.890 "abort": true, 00:15:58.890 "nvme_admin": false, 00:15:58.890 "nvme_io": false 00:15:58.890 }, 00:15:58.890 "memory_domains": [ 00:15:58.890 { 00:15:58.890 "dma_device_id": "system", 00:15:58.890 "dma_device_type": 1 00:15:58.890 }, 00:15:58.890 { 00:15:58.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.890 "dma_device_type": 2 00:15:58.890 } 00:15:58.890 ], 00:15:58.890 "driver_specific": {} 00:15:58.890 } 00:15:58.890 ] 00:15:58.890 03:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:58.890 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:58.890 
03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:58.890 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:58.890 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:58.890 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:58.890 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:58.890 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.890 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.890 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.890 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.890 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.890 03:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.147 03:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.147 "name": "Existed_Raid", 00:15:59.147 "uuid": "a0154385-6d9d-487c-afc7-10961c53362d", 00:15:59.147 "strip_size_kb": 64, 00:15:59.147 "state": "configuring", 00:15:59.147 "raid_level": "raid0", 00:15:59.147 "superblock": true, 00:15:59.147 "num_base_bdevs": 4, 00:15:59.147 "num_base_bdevs_discovered": 1, 00:15:59.147 "num_base_bdevs_operational": 4, 00:15:59.147 "base_bdevs_list": [ 00:15:59.147 { 00:15:59.147 "name": "BaseBdev1", 00:15:59.147 "uuid": "548102ae-5f45-4a4e-8b60-7085bf75f142", 00:15:59.147 "is_configured": true, 00:15:59.147 "data_offset": 2048, 00:15:59.147 "data_size": 63488 00:15:59.147 }, 00:15:59.147 { 00:15:59.147 "name": "BaseBdev2", 00:15:59.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.147 "is_configured": false, 00:15:59.147 "data_offset": 0, 00:15:59.147 "data_size": 0 00:15:59.147 }, 00:15:59.147 { 00:15:59.147 "name": "BaseBdev3", 00:15:59.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.147 "is_configured": false, 00:15:59.147 "data_offset": 0, 00:15:59.147 "data_size": 0 00:15:59.147 }, 00:15:59.147 { 00:15:59.147 "name": "BaseBdev4", 00:15:59.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.147 "is_configured": false, 00:15:59.148 "data_offset": 0, 00:15:59.148 "data_size": 0 00:15:59.148 } 00:15:59.148 ] 00:15:59.148 }' 00:15:59.148 03:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.148 03:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.713 03:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:59.969 [2024-05-15 03:10:30.966048] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.969 [2024-05-15 03:10:30.966088] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12cf0a0 name Existed_Raid, state configuring 00:15:59.969 03:10:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:00.226 [2024-05-15 03:10:31.222781] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.226 [2024-05-15 03:10:31.224293] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.226 [2024-05-15 03:10:31.224323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.226 [2024-05-15 03:10:31.224332] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:00.226 [2024-05-15 03:10:31.224340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.226 [2024-05-15 03:10:31.224347] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:00.226 [2024-05-15 03:10:31.224355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.226 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.483 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.483 "name": "Existed_Raid", 00:16:00.483 "uuid": "d27c3d9e-7d88-41ed-8d99-13a523733573", 00:16:00.483 "strip_size_kb": 64, 00:16:00.483 "state": "configuring", 00:16:00.483 "raid_level": "raid0", 00:16:00.483 "superblock": true, 00:16:00.483 "num_base_bdevs": 4, 00:16:00.483 "num_base_bdevs_discovered": 1, 00:16:00.483 "num_base_bdevs_operational": 4, 00:16:00.483 "base_bdevs_list": [ 00:16:00.483 { 00:16:00.483 "name": "BaseBdev1", 00:16:00.483 "uuid": "548102ae-5f45-4a4e-8b60-7085bf75f142", 00:16:00.483 "is_configured": true, 00:16:00.483 "data_offset": 2048, 
00:16:00.483 "data_size": 63488 00:16:00.483 }, 00:16:00.483 { 00:16:00.483 "name": "BaseBdev2", 00:16:00.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.483 "is_configured": false, 00:16:00.483 "data_offset": 0, 00:16:00.483 "data_size": 0 00:16:00.483 }, 00:16:00.483 { 00:16:00.483 "name": "BaseBdev3", 00:16:00.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.483 "is_configured": false, 00:16:00.483 "data_offset": 0, 00:16:00.483 "data_size": 0 00:16:00.483 }, 00:16:00.483 { 00:16:00.483 "name": "BaseBdev4", 00:16:00.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.483 "is_configured": false, 00:16:00.483 "data_offset": 0, 00:16:00.483 "data_size": 0 00:16:00.483 } 00:16:00.483 ] 00:16:00.483 }' 00:16:00.483 03:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.483 03:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.048 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:01.305 [2024-05-15 03:10:32.280726] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.305 BaseBdev2 00:16:01.305 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:16:01.305 03:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:01.305 03:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:01.305 03:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:01.305 03:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:01.305 03:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:01.306 03:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.563 03:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:01.820 [ 00:16:01.820 { 00:16:01.820 "name": "BaseBdev2", 00:16:01.820 "aliases": [ 00:16:01.820 "c2cf3592-d8eb-4104-af15-17dc968c8ce5" 00:16:01.820 ], 00:16:01.820 "product_name": "Malloc disk", 00:16:01.820 "block_size": 512, 00:16:01.820 "num_blocks": 65536, 00:16:01.820 "uuid": "c2cf3592-d8eb-4104-af15-17dc968c8ce5", 00:16:01.820 "assigned_rate_limits": { 00:16:01.820 "rw_ios_per_sec": 0, 00:16:01.820 "rw_mbytes_per_sec": 0, 00:16:01.820 "r_mbytes_per_sec": 0, 00:16:01.820 "w_mbytes_per_sec": 0 00:16:01.820 }, 00:16:01.820 "claimed": true, 00:16:01.820 "claim_type": "exclusive_write", 00:16:01.820 "zoned": false, 00:16:01.820 "supported_io_types": { 00:16:01.820 "read": true, 00:16:01.820 "write": true, 00:16:01.820 "unmap": true, 00:16:01.820 "write_zeroes": true, 00:16:01.820 "flush": true, 00:16:01.820 "reset": true, 00:16:01.820 "compare": false, 00:16:01.820 "compare_and_write": false, 00:16:01.820 "abort": true, 00:16:01.820 "nvme_admin": false, 00:16:01.820 "nvme_io": false 00:16:01.820 }, 00:16:01.820 "memory_domains": [ 00:16:01.820 { 00:16:01.820 "dma_device_id": "system", 
00:16:01.820 "dma_device_type": 1 00:16:01.820 }, 00:16:01.820 { 00:16:01.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.820 "dma_device_type": 2 00:16:01.820 } 00:16:01.820 ], 00:16:01.820 "driver_specific": {} 00:16:01.820 } 00:16:01.820 ] 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.820 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.821 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.821 03:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.105 03:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.105 "name": "Existed_Raid", 00:16:02.105 "uuid": "d27c3d9e-7d88-41ed-8d99-13a523733573", 00:16:02.105 "strip_size_kb": 64, 00:16:02.105 "state": "configuring", 00:16:02.105 "raid_level": "raid0", 00:16:02.105 "superblock": true, 00:16:02.105 "num_base_bdevs": 4, 00:16:02.105 "num_base_bdevs_discovered": 2, 00:16:02.105 "num_base_bdevs_operational": 4, 00:16:02.105 "base_bdevs_list": [ 00:16:02.105 { 00:16:02.105 "name": "BaseBdev1", 00:16:02.105 "uuid": "548102ae-5f45-4a4e-8b60-7085bf75f142", 00:16:02.105 "is_configured": true, 00:16:02.105 "data_offset": 2048, 00:16:02.105 "data_size": 63488 00:16:02.105 }, 00:16:02.105 { 00:16:02.105 "name": "BaseBdev2", 00:16:02.105 "uuid": "c2cf3592-d8eb-4104-af15-17dc968c8ce5", 00:16:02.105 "is_configured": true, 00:16:02.105 "data_offset": 2048, 00:16:02.105 "data_size": 63488 00:16:02.105 }, 00:16:02.105 { 00:16:02.105 "name": "BaseBdev3", 00:16:02.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.105 "is_configured": false, 00:16:02.105 "data_offset": 0, 00:16:02.105 "data_size": 0 00:16:02.105 }, 00:16:02.105 { 00:16:02.105 "name": "BaseBdev4", 00:16:02.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.105 "is_configured": false, 00:16:02.105 "data_offset": 0, 00:16:02.105 "data_size": 0 00:16:02.105 } 00:16:02.105 ] 00:16:02.105 }' 00:16:02.105 
03:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.105 03:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.669 03:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:02.927 [2024-05-15 03:10:33.940462] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.927 BaseBdev3 00:16:02.927 03:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:16:02.927 03:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:02.927 03:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:02.927 03:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:02.927 03:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:02.927 03:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:02.927 03:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:03.185 03:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:03.442 [ 00:16:03.442 { 00:16:03.442 "name": "BaseBdev3", 00:16:03.442 "aliases": [ 00:16:03.442 "654fa088-2857-4d99-b5ab-6cfbdeaed750" 00:16:03.442 ], 00:16:03.442 "product_name": "Malloc disk", 00:16:03.442 "block_size": 512, 00:16:03.442 "num_blocks": 65536, 00:16:03.442 "uuid": "654fa088-2857-4d99-b5ab-6cfbdeaed750", 00:16:03.442 "assigned_rate_limits": { 00:16:03.442 "rw_ios_per_sec": 0, 00:16:03.442 "rw_mbytes_per_sec": 0, 00:16:03.442 "r_mbytes_per_sec": 0, 00:16:03.442 "w_mbytes_per_sec": 0 00:16:03.442 }, 00:16:03.442 "claimed": true, 00:16:03.442 "claim_type": "exclusive_write", 00:16:03.442 "zoned": false, 00:16:03.442 "supported_io_types": { 00:16:03.442 "read": true, 00:16:03.442 "write": true, 00:16:03.442 "unmap": true, 00:16:03.442 "write_zeroes": true, 00:16:03.442 "flush": true, 00:16:03.442 "reset": true, 00:16:03.442 "compare": false, 00:16:03.442 "compare_and_write": false, 00:16:03.442 "abort": true, 00:16:03.442 "nvme_admin": false, 00:16:03.442 "nvme_io": false 00:16:03.442 }, 00:16:03.442 "memory_domains": [ 00:16:03.442 { 00:16:03.442 "dma_device_id": "system", 00:16:03.442 "dma_device_type": 1 00:16:03.442 }, 00:16:03.442 { 00:16:03.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.442 "dma_device_type": 2 00:16:03.442 } 00:16:03.442 ], 00:16:03.442 "driver_specific": {} 00:16:03.442 } 00:16:03.442 ] 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:03.442 03:10:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.442 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.443 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.700 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.700 "name": "Existed_Raid", 00:16:03.700 "uuid": "d27c3d9e-7d88-41ed-8d99-13a523733573", 00:16:03.700 "strip_size_kb": 64, 00:16:03.700 "state": "configuring", 00:16:03.700 "raid_level": "raid0", 00:16:03.700 "superblock": true, 00:16:03.700 "num_base_bdevs": 4, 00:16:03.700 "num_base_bdevs_discovered": 3, 00:16:03.700 "num_base_bdevs_operational": 4, 00:16:03.700 "base_bdevs_list": [ 00:16:03.700 { 00:16:03.701 "name": "BaseBdev1", 00:16:03.701 "uuid": "548102ae-5f45-4a4e-8b60-7085bf75f142", 00:16:03.701 "is_configured": true, 00:16:03.701 "data_offset": 2048, 00:16:03.701 "data_size": 63488 00:16:03.701 }, 00:16:03.701 { 00:16:03.701 "name": "BaseBdev2", 00:16:03.701 "uuid": "c2cf3592-d8eb-4104-af15-17dc968c8ce5", 00:16:03.701 "is_configured": true, 00:16:03.701 "data_offset": 2048, 00:16:03.701 "data_size": 63488 00:16:03.701 }, 00:16:03.701 { 00:16:03.701 "name": "BaseBdev3", 00:16:03.701 "uuid": "654fa088-2857-4d99-b5ab-6cfbdeaed750", 00:16:03.701 "is_configured": true, 00:16:03.701 "data_offset": 2048, 00:16:03.701 "data_size": 63488 00:16:03.701 }, 00:16:03.701 { 00:16:03.701 "name": "BaseBdev4", 00:16:03.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.701 "is_configured": false, 00:16:03.701 "data_offset": 0, 00:16:03.701 "data_size": 0 00:16:03.701 } 00:16:03.701 ] 00:16:03.701 }' 00:16:03.701 03:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.701 03:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.266 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:04.524 [2024-05-15 03:10:35.572009] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:04.524 [2024-05-15 03:10:35.572190] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x12ce670 00:16:04.524 [2024-05-15 03:10:35.572204] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:04.524 [2024-05-15 03:10:35.572399] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12d0a30 00:16:04.524 [2024-05-15 03:10:35.572527] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12ce670 00:16:04.524 [2024-05-15 03:10:35.572536] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x12ce670 00:16:04.524 [2024-05-15 03:10:35.572631] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.524 BaseBdev4 00:16:04.524 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:16:04.524 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:16:04.524 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:04.524 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:04.524 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:04.524 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:04.524 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.782 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:05.038 [ 00:16:05.038 { 00:16:05.038 "name": "BaseBdev4", 00:16:05.038 "aliases": [ 00:16:05.038 "402dab4d-2241-476b-a7fb-cc522715f96a" 00:16:05.038 ], 00:16:05.038 "product_name": "Malloc disk", 00:16:05.038 "block_size": 512, 00:16:05.038 "num_blocks": 65536, 00:16:05.038 "uuid": "402dab4d-2241-476b-a7fb-cc522715f96a", 00:16:05.038 "assigned_rate_limits": { 00:16:05.038 "rw_ios_per_sec": 0, 00:16:05.038 "rw_mbytes_per_sec": 0, 00:16:05.038 "r_mbytes_per_sec": 0, 00:16:05.038 "w_mbytes_per_sec": 0 00:16:05.038 }, 00:16:05.038 "claimed": true, 00:16:05.038 "claim_type": "exclusive_write", 00:16:05.038 "zoned": false, 00:16:05.038 "supported_io_types": { 00:16:05.038 "read": true, 00:16:05.038 "write": true, 00:16:05.038 "unmap": true, 00:16:05.038 "write_zeroes": true, 00:16:05.038 "flush": true, 00:16:05.038 "reset": true, 00:16:05.038 "compare": false, 00:16:05.038 "compare_and_write": false, 00:16:05.038 "abort": true, 00:16:05.038 "nvme_admin": false, 00:16:05.038 "nvme_io": false 00:16:05.038 }, 00:16:05.038 "memory_domains": [ 00:16:05.039 { 00:16:05.039 "dma_device_id": "system", 00:16:05.039 "dma_device_type": 1 00:16:05.039 }, 00:16:05.039 { 00:16:05.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.039 "dma_device_type": 2 00:16:05.039 } 00:16:05.039 ], 00:16:05.039 "driver_specific": {} 00:16:05.039 } 00:16:05.039 ] 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online 
raid0 64 4 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.039 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.296 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.296 "name": "Existed_Raid", 00:16:05.296 "uuid": "d27c3d9e-7d88-41ed-8d99-13a523733573", 00:16:05.296 "strip_size_kb": 64, 00:16:05.296 "state": "online", 00:16:05.296 "raid_level": "raid0", 00:16:05.296 "superblock": true, 00:16:05.296 "num_base_bdevs": 4, 00:16:05.296 "num_base_bdevs_discovered": 4, 00:16:05.296 "num_base_bdevs_operational": 4, 00:16:05.296 "base_bdevs_list": [ 00:16:05.296 { 00:16:05.296 "name": "BaseBdev1", 00:16:05.296 "uuid": "548102ae-5f45-4a4e-8b60-7085bf75f142", 00:16:05.296 "is_configured": true, 00:16:05.296 "data_offset": 2048, 00:16:05.296 "data_size": 63488 00:16:05.296 }, 00:16:05.296 { 00:16:05.296 "name": "BaseBdev2", 00:16:05.296 "uuid": "c2cf3592-d8eb-4104-af15-17dc968c8ce5", 00:16:05.296 "is_configured": true, 00:16:05.296 "data_offset": 2048, 00:16:05.296 "data_size": 63488 00:16:05.296 }, 00:16:05.296 { 00:16:05.296 "name": "BaseBdev3", 00:16:05.296 "uuid": "654fa088-2857-4d99-b5ab-6cfbdeaed750", 00:16:05.296 "is_configured": true, 00:16:05.296 "data_offset": 2048, 00:16:05.296 "data_size": 63488 00:16:05.296 }, 00:16:05.296 { 00:16:05.296 "name": "BaseBdev4", 00:16:05.296 "uuid": "402dab4d-2241-476b-a7fb-cc522715f96a", 00:16:05.296 "is_configured": true, 00:16:05.296 "data_offset": 2048, 00:16:05.296 "data_size": 63488 00:16:05.296 } 00:16:05.296 ] 00:16:05.296 }' 00:16:05.296 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.296 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.862 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:16:05.862 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:16:05.862 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:05.862 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:05.862 03:10:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:05.862 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:16:05.862 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:05.862 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:06.120 [2024-05-15 03:10:37.168600] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.120 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:06.120 "name": "Existed_Raid", 00:16:06.120 "aliases": [ 00:16:06.120 "d27c3d9e-7d88-41ed-8d99-13a523733573" 00:16:06.120 ], 00:16:06.120 "product_name": "Raid Volume", 00:16:06.120 "block_size": 512, 00:16:06.120 "num_blocks": 253952, 00:16:06.120 "uuid": "d27c3d9e-7d88-41ed-8d99-13a523733573", 00:16:06.120 "assigned_rate_limits": { 00:16:06.120 "rw_ios_per_sec": 0, 00:16:06.120 "rw_mbytes_per_sec": 0, 00:16:06.120 "r_mbytes_per_sec": 0, 00:16:06.120 "w_mbytes_per_sec": 0 00:16:06.120 }, 00:16:06.120 "claimed": false, 00:16:06.120 "zoned": false, 00:16:06.120 "supported_io_types": { 00:16:06.120 "read": true, 00:16:06.120 "write": true, 00:16:06.120 "unmap": true, 00:16:06.120 "write_zeroes": true, 00:16:06.120 "flush": true, 00:16:06.120 "reset": true, 00:16:06.120 "compare": false, 00:16:06.120 "compare_and_write": false, 00:16:06.120 "abort": false, 00:16:06.120 "nvme_admin": false, 00:16:06.120 "nvme_io": false 00:16:06.120 }, 00:16:06.120 "memory_domains": [ 00:16:06.120 { 00:16:06.120 "dma_device_id": "system", 00:16:06.120 "dma_device_type": 1 00:16:06.120 }, 00:16:06.120 { 00:16:06.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.120 "dma_device_type": 2 00:16:06.120 }, 00:16:06.120 { 00:16:06.120 "dma_device_id": "system", 00:16:06.120 "dma_device_type": 1 00:16:06.120 }, 00:16:06.120 { 00:16:06.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.120 "dma_device_type": 2 00:16:06.120 }, 00:16:06.120 { 00:16:06.120 "dma_device_id": "system", 00:16:06.120 "dma_device_type": 1 00:16:06.120 }, 00:16:06.120 { 00:16:06.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.120 "dma_device_type": 2 00:16:06.120 }, 00:16:06.120 { 00:16:06.120 "dma_device_id": "system", 00:16:06.120 "dma_device_type": 1 00:16:06.120 }, 00:16:06.120 { 00:16:06.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.120 "dma_device_type": 2 00:16:06.120 } 00:16:06.120 ], 00:16:06.120 "driver_specific": { 00:16:06.120 "raid": { 00:16:06.120 "uuid": "d27c3d9e-7d88-41ed-8d99-13a523733573", 00:16:06.120 "strip_size_kb": 64, 00:16:06.120 "state": "online", 00:16:06.120 "raid_level": "raid0", 00:16:06.120 "superblock": true, 00:16:06.120 "num_base_bdevs": 4, 00:16:06.120 "num_base_bdevs_discovered": 4, 00:16:06.120 "num_base_bdevs_operational": 4, 00:16:06.120 "base_bdevs_list": [ 00:16:06.120 { 00:16:06.120 "name": "BaseBdev1", 00:16:06.120 "uuid": "548102ae-5f45-4a4e-8b60-7085bf75f142", 00:16:06.120 "is_configured": true, 00:16:06.120 "data_offset": 2048, 00:16:06.120 "data_size": 63488 00:16:06.120 }, 00:16:06.120 { 00:16:06.120 "name": "BaseBdev2", 00:16:06.120 "uuid": "c2cf3592-d8eb-4104-af15-17dc968c8ce5", 00:16:06.120 "is_configured": true, 00:16:06.120 "data_offset": 2048, 00:16:06.120 "data_size": 63488 00:16:06.120 }, 00:16:06.120 { 00:16:06.120 "name": "BaseBdev3", 
00:16:06.120 "uuid": "654fa088-2857-4d99-b5ab-6cfbdeaed750", 00:16:06.120 "is_configured": true, 00:16:06.120 "data_offset": 2048, 00:16:06.120 "data_size": 63488 00:16:06.120 }, 00:16:06.120 { 00:16:06.120 "name": "BaseBdev4", 00:16:06.120 "uuid": "402dab4d-2241-476b-a7fb-cc522715f96a", 00:16:06.120 "is_configured": true, 00:16:06.120 "data_offset": 2048, 00:16:06.120 "data_size": 63488 00:16:06.120 } 00:16:06.120 ] 00:16:06.120 } 00:16:06.120 } 00:16:06.120 }' 00:16:06.120 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:06.120 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:16:06.120 BaseBdev2 00:16:06.120 BaseBdev3 00:16:06.120 BaseBdev4' 00:16:06.120 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:06.120 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:06.120 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:06.378 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:06.378 "name": "BaseBdev1", 00:16:06.378 "aliases": [ 00:16:06.378 "548102ae-5f45-4a4e-8b60-7085bf75f142" 00:16:06.378 ], 00:16:06.378 "product_name": "Malloc disk", 00:16:06.378 "block_size": 512, 00:16:06.378 "num_blocks": 65536, 00:16:06.378 "uuid": "548102ae-5f45-4a4e-8b60-7085bf75f142", 00:16:06.378 "assigned_rate_limits": { 00:16:06.378 "rw_ios_per_sec": 0, 00:16:06.378 "rw_mbytes_per_sec": 0, 00:16:06.378 "r_mbytes_per_sec": 0, 00:16:06.378 "w_mbytes_per_sec": 0 00:16:06.378 }, 00:16:06.378 "claimed": true, 00:16:06.378 "claim_type": "exclusive_write", 00:16:06.378 "zoned": false, 00:16:06.378 "supported_io_types": { 00:16:06.378 "read": true, 00:16:06.378 "write": true, 00:16:06.378 "unmap": true, 00:16:06.378 "write_zeroes": true, 00:16:06.378 "flush": true, 00:16:06.378 "reset": true, 00:16:06.378 "compare": false, 00:16:06.378 "compare_and_write": false, 00:16:06.378 "abort": true, 00:16:06.378 "nvme_admin": false, 00:16:06.378 "nvme_io": false 00:16:06.378 }, 00:16:06.378 "memory_domains": [ 00:16:06.378 { 00:16:06.378 "dma_device_id": "system", 00:16:06.378 "dma_device_type": 1 00:16:06.378 }, 00:16:06.378 { 00:16:06.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.378 "dma_device_type": 2 00:16:06.378 } 00:16:06.378 ], 00:16:06.378 "driver_specific": {} 00:16:06.378 }' 00:16:06.378 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:06.378 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:06.636 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:06.636 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:06.636 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:06.636 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:06.636 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:06.636 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:06.636 03:10:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:06.636 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:06.894 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:06.894 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:06.894 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:06.894 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:06.894 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:07.152 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:07.152 "name": "BaseBdev2", 00:16:07.152 "aliases": [ 00:16:07.152 "c2cf3592-d8eb-4104-af15-17dc968c8ce5" 00:16:07.152 ], 00:16:07.152 "product_name": "Malloc disk", 00:16:07.152 "block_size": 512, 00:16:07.152 "num_blocks": 65536, 00:16:07.152 "uuid": "c2cf3592-d8eb-4104-af15-17dc968c8ce5", 00:16:07.152 "assigned_rate_limits": { 00:16:07.152 "rw_ios_per_sec": 0, 00:16:07.152 "rw_mbytes_per_sec": 0, 00:16:07.152 "r_mbytes_per_sec": 0, 00:16:07.152 "w_mbytes_per_sec": 0 00:16:07.152 }, 00:16:07.152 "claimed": true, 00:16:07.152 "claim_type": "exclusive_write", 00:16:07.152 "zoned": false, 00:16:07.152 "supported_io_types": { 00:16:07.152 "read": true, 00:16:07.152 "write": true, 00:16:07.152 "unmap": true, 00:16:07.152 "write_zeroes": true, 00:16:07.152 "flush": true, 00:16:07.152 "reset": true, 00:16:07.152 "compare": false, 00:16:07.152 "compare_and_write": false, 00:16:07.152 "abort": true, 00:16:07.152 "nvme_admin": false, 00:16:07.152 "nvme_io": false 00:16:07.152 }, 00:16:07.152 "memory_domains": [ 00:16:07.152 { 00:16:07.152 "dma_device_id": "system", 00:16:07.152 "dma_device_type": 1 00:16:07.152 }, 00:16:07.152 { 00:16:07.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.152 "dma_device_type": 2 00:16:07.152 } 00:16:07.152 ], 00:16:07.152 "driver_specific": {} 00:16:07.152 }' 00:16:07.152 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:07.152 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:07.152 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:07.152 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:07.152 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:07.152 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.152 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:07.410 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:07.410 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.410 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:07.410 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:07.410 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:07.410 03:10:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:07.410 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:07.410 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:07.668 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:07.668 "name": "BaseBdev3", 00:16:07.668 "aliases": [ 00:16:07.668 "654fa088-2857-4d99-b5ab-6cfbdeaed750" 00:16:07.668 ], 00:16:07.668 "product_name": "Malloc disk", 00:16:07.668 "block_size": 512, 00:16:07.668 "num_blocks": 65536, 00:16:07.668 "uuid": "654fa088-2857-4d99-b5ab-6cfbdeaed750", 00:16:07.668 "assigned_rate_limits": { 00:16:07.668 "rw_ios_per_sec": 0, 00:16:07.668 "rw_mbytes_per_sec": 0, 00:16:07.668 "r_mbytes_per_sec": 0, 00:16:07.668 "w_mbytes_per_sec": 0 00:16:07.668 }, 00:16:07.668 "claimed": true, 00:16:07.668 "claim_type": "exclusive_write", 00:16:07.668 "zoned": false, 00:16:07.668 "supported_io_types": { 00:16:07.668 "read": true, 00:16:07.668 "write": true, 00:16:07.668 "unmap": true, 00:16:07.668 "write_zeroes": true, 00:16:07.668 "flush": true, 00:16:07.668 "reset": true, 00:16:07.668 "compare": false, 00:16:07.668 "compare_and_write": false, 00:16:07.668 "abort": true, 00:16:07.668 "nvme_admin": false, 00:16:07.668 "nvme_io": false 00:16:07.668 }, 00:16:07.668 "memory_domains": [ 00:16:07.668 { 00:16:07.668 "dma_device_id": "system", 00:16:07.668 "dma_device_type": 1 00:16:07.668 }, 00:16:07.668 { 00:16:07.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.668 "dma_device_type": 2 00:16:07.668 } 00:16:07.668 ], 00:16:07.668 "driver_specific": {} 00:16:07.668 }' 00:16:07.668 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:07.668 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:07.668 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:07.668 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:07.925 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:07.925 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.925 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:07.925 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:07.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:07.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:08.183 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:08.183 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:08.183 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:08.183 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:08.441 03:10:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:08.441 "name": "BaseBdev4", 00:16:08.441 "aliases": [ 00:16:08.441 "402dab4d-2241-476b-a7fb-cc522715f96a" 00:16:08.441 ], 00:16:08.441 "product_name": "Malloc disk", 00:16:08.441 "block_size": 512, 00:16:08.441 "num_blocks": 65536, 00:16:08.441 "uuid": "402dab4d-2241-476b-a7fb-cc522715f96a", 00:16:08.441 "assigned_rate_limits": { 00:16:08.441 "rw_ios_per_sec": 0, 00:16:08.441 "rw_mbytes_per_sec": 0, 00:16:08.441 "r_mbytes_per_sec": 0, 00:16:08.441 "w_mbytes_per_sec": 0 00:16:08.441 }, 00:16:08.441 "claimed": true, 00:16:08.441 "claim_type": "exclusive_write", 00:16:08.441 "zoned": false, 00:16:08.441 "supported_io_types": { 00:16:08.441 "read": true, 00:16:08.441 "write": true, 00:16:08.441 "unmap": true, 00:16:08.441 "write_zeroes": true, 00:16:08.441 "flush": true, 00:16:08.441 "reset": true, 00:16:08.441 "compare": false, 00:16:08.441 "compare_and_write": false, 00:16:08.441 "abort": true, 00:16:08.441 "nvme_admin": false, 00:16:08.441 "nvme_io": false 00:16:08.441 }, 00:16:08.441 "memory_domains": [ 00:16:08.441 { 00:16:08.441 "dma_device_id": "system", 00:16:08.441 "dma_device_type": 1 00:16:08.441 }, 00:16:08.441 { 00:16:08.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.441 "dma_device_type": 2 00:16:08.441 } 00:16:08.441 ], 00:16:08.441 "driver_specific": {} 00:16:08.441 }' 00:16:08.441 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:08.441 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:08.441 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:08.441 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:08.441 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:08.441 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:08.441 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:08.441 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:08.700 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:08.700 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:08.700 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:08.700 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:08.700 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:08.959 [2024-05-15 03:10:39.915762] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.959 [2024-05-15 03:10:39.915786] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.959 [2024-05-15 03:10:39.915833] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 
in 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.959 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.218 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.218 "name": "Existed_Raid", 00:16:09.218 "uuid": "d27c3d9e-7d88-41ed-8d99-13a523733573", 00:16:09.218 "strip_size_kb": 64, 00:16:09.218 "state": "offline", 00:16:09.218 "raid_level": "raid0", 00:16:09.218 "superblock": true, 00:16:09.218 "num_base_bdevs": 4, 00:16:09.218 "num_base_bdevs_discovered": 3, 00:16:09.218 "num_base_bdevs_operational": 3, 00:16:09.218 "base_bdevs_list": [ 00:16:09.218 { 00:16:09.218 "name": null, 00:16:09.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.218 "is_configured": false, 00:16:09.218 "data_offset": 2048, 00:16:09.218 "data_size": 63488 00:16:09.218 }, 00:16:09.218 { 00:16:09.218 "name": "BaseBdev2", 00:16:09.218 "uuid": "c2cf3592-d8eb-4104-af15-17dc968c8ce5", 00:16:09.218 "is_configured": true, 00:16:09.218 "data_offset": 2048, 00:16:09.218 "data_size": 63488 00:16:09.218 }, 00:16:09.218 { 00:16:09.218 "name": "BaseBdev3", 00:16:09.218 "uuid": "654fa088-2857-4d99-b5ab-6cfbdeaed750", 00:16:09.218 "is_configured": true, 00:16:09.218 "data_offset": 2048, 00:16:09.218 "data_size": 63488 00:16:09.218 }, 00:16:09.218 { 00:16:09.218 "name": "BaseBdev4", 00:16:09.218 "uuid": "402dab4d-2241-476b-a7fb-cc522715f96a", 00:16:09.218 "is_configured": true, 00:16:09.218 "data_offset": 2048, 00:16:09.218 "data_size": 63488 00:16:09.218 } 00:16:09.218 ] 00:16:09.218 }' 00:16:09.218 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.218 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.784 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:09.784 03:10:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.784 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.784 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:16:10.043 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:16:10.043 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.044 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:10.302 [2024-05-15 03:10:41.304599] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:10.302 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:10.302 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:10.302 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.302 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:16:10.561 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:16:10.561 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.561 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:10.561 [2024-05-15 03:10:41.628003] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:10.561 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:10.561 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:10.561 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.561 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:16:10.819 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:16:10.819 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.819 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:11.078 [2024-05-15 03:10:42.039417] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:11.078 [2024-05-15 03:10:42.039457] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12ce670 name Existed_Raid, state offline 00:16:11.078 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:11.078 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:11.078 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.078 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:16:11.078 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:16:11.078 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:16:11.078 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:16:11.078 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:16:11.078 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:11.078 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:11.337 BaseBdev2 00:16:11.337 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:16:11.337 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:11.337 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:11.337 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:11.337 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:11.337 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:11.337 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.595 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:11.855 [ 00:16:11.855 { 00:16:11.855 "name": "BaseBdev2", 00:16:11.855 "aliases": [ 00:16:11.855 "d07a54ae-f57e-4a79-a540-eddcf96f33d2" 00:16:11.855 ], 00:16:11.855 "product_name": "Malloc disk", 00:16:11.855 "block_size": 512, 00:16:11.855 "num_blocks": 65536, 00:16:11.855 "uuid": "d07a54ae-f57e-4a79-a540-eddcf96f33d2", 00:16:11.855 "assigned_rate_limits": { 00:16:11.855 "rw_ios_per_sec": 0, 00:16:11.855 "rw_mbytes_per_sec": 0, 00:16:11.855 "r_mbytes_per_sec": 0, 00:16:11.855 "w_mbytes_per_sec": 0 00:16:11.855 }, 00:16:11.855 "claimed": false, 00:16:11.855 "zoned": false, 00:16:11.855 "supported_io_types": { 00:16:11.855 "read": true, 00:16:11.855 "write": true, 00:16:11.855 "unmap": true, 00:16:11.855 "write_zeroes": true, 00:16:11.855 "flush": true, 00:16:11.855 "reset": true, 00:16:11.855 "compare": false, 00:16:11.855 "compare_and_write": false, 00:16:11.855 "abort": true, 00:16:11.855 "nvme_admin": false, 00:16:11.855 "nvme_io": false 00:16:11.855 }, 00:16:11.855 "memory_domains": [ 00:16:11.855 { 00:16:11.855 "dma_device_id": "system", 00:16:11.855 "dma_device_type": 1 00:16:11.855 }, 00:16:11.855 { 00:16:11.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.855 "dma_device_type": 2 00:16:11.855 } 00:16:11.855 ], 00:16:11.855 "driver_specific": {} 00:16:11.855 } 00:16:11.855 ] 00:16:11.855 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 
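The "return 0" above closes out the waitforbdev helper: each replacement base bdev in this phase is created with bdev_malloc_create, then bdev_wait_for_examine drains the pending examine callbacks, and bdev_get_bdevs -t 2000 confirms the bdev registered. A minimal standalone sketch of that create-and-wait pattern, using only the RPC invocations and paths visible in this trace (the 32 MB / 512-byte sizing matches the "num_blocks": 65536 reported for each Malloc disk; that -t makes bdev_get_bdevs wait up to the given milliseconds is an assumption inferred from how the helper uses it here), might look like:
  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # 32 MB malloc bdev with 512-byte blocks -> 65536 blocks
  $rpc bdev_malloc_create 32 512 -b BaseBdev2
  # let pending examine callbacks finish before checking for the bdev
  $rpc bdev_wait_for_examine
  # wait up to 2000 ms for the new bdev to appear, as waitforbdev does above
  $rpc bdev_get_bdevs -b BaseBdev2 -t 2000 > /dev/null && echo "BaseBdev2 ready"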
00:16:11.855 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:16:11.855 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:11.855 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:11.855 BaseBdev3 00:16:11.855 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:16:11.855 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:11.855 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:11.855 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:11.855 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:11.855 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:11.855 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:12.113 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:12.370 [ 00:16:12.370 { 00:16:12.370 "name": "BaseBdev3", 00:16:12.370 "aliases": [ 00:16:12.370 "5b1278c2-8ce3-4946-ad35-37ba4acc7a10" 00:16:12.370 ], 00:16:12.370 "product_name": "Malloc disk", 00:16:12.370 "block_size": 512, 00:16:12.370 "num_blocks": 65536, 00:16:12.370 "uuid": "5b1278c2-8ce3-4946-ad35-37ba4acc7a10", 00:16:12.370 "assigned_rate_limits": { 00:16:12.370 "rw_ios_per_sec": 0, 00:16:12.370 "rw_mbytes_per_sec": 0, 00:16:12.370 "r_mbytes_per_sec": 0, 00:16:12.370 "w_mbytes_per_sec": 0 00:16:12.370 }, 00:16:12.370 "claimed": false, 00:16:12.370 "zoned": false, 00:16:12.370 "supported_io_types": { 00:16:12.370 "read": true, 00:16:12.370 "write": true, 00:16:12.370 "unmap": true, 00:16:12.370 "write_zeroes": true, 00:16:12.370 "flush": true, 00:16:12.370 "reset": true, 00:16:12.370 "compare": false, 00:16:12.370 "compare_and_write": false, 00:16:12.370 "abort": true, 00:16:12.370 "nvme_admin": false, 00:16:12.370 "nvme_io": false 00:16:12.370 }, 00:16:12.370 "memory_domains": [ 00:16:12.370 { 00:16:12.370 "dma_device_id": "system", 00:16:12.370 "dma_device_type": 1 00:16:12.370 }, 00:16:12.370 { 00:16:12.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.370 "dma_device_type": 2 00:16:12.370 } 00:16:12.370 ], 00:16:12.370 "driver_specific": {} 00:16:12.370 } 00:16:12.370 ] 00:16:12.370 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:12.370 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:16:12.370 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:12.370 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:12.370 BaseBdev4 00:16:12.370 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
waitforbdev BaseBdev4 00:16:12.370 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:16:12.370 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:12.370 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:12.370 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:12.370 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:12.370 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:12.627 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:12.884 [ 00:16:12.884 { 00:16:12.884 "name": "BaseBdev4", 00:16:12.884 "aliases": [ 00:16:12.884 "f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812" 00:16:12.884 ], 00:16:12.884 "product_name": "Malloc disk", 00:16:12.884 "block_size": 512, 00:16:12.884 "num_blocks": 65536, 00:16:12.884 "uuid": "f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812", 00:16:12.884 "assigned_rate_limits": { 00:16:12.884 "rw_ios_per_sec": 0, 00:16:12.884 "rw_mbytes_per_sec": 0, 00:16:12.884 "r_mbytes_per_sec": 0, 00:16:12.884 "w_mbytes_per_sec": 0 00:16:12.884 }, 00:16:12.884 "claimed": false, 00:16:12.884 "zoned": false, 00:16:12.884 "supported_io_types": { 00:16:12.884 "read": true, 00:16:12.884 "write": true, 00:16:12.884 "unmap": true, 00:16:12.884 "write_zeroes": true, 00:16:12.884 "flush": true, 00:16:12.884 "reset": true, 00:16:12.884 "compare": false, 00:16:12.884 "compare_and_write": false, 00:16:12.884 "abort": true, 00:16:12.884 "nvme_admin": false, 00:16:12.884 "nvme_io": false 00:16:12.884 }, 00:16:12.884 "memory_domains": [ 00:16:12.884 { 00:16:12.884 "dma_device_id": "system", 00:16:12.884 "dma_device_type": 1 00:16:12.884 }, 00:16:12.884 { 00:16:12.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.884 "dma_device_type": 2 00:16:12.884 } 00:16:12.884 ], 00:16:12.884 "driver_specific": {} 00:16:12.884 } 00:16:12.884 ] 00:16:12.884 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:12.884 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:16:12.884 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:12.884 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:13.141 [2024-05-15 03:10:44.132529] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.141 [2024-05-15 03:10:44.132565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.141 [2024-05-15 03:10:44.132582] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.141 [2024-05-15 03:10:44.133978] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:13.141 [2024-05-15 03:10:44.134020] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:16:13.141 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:13.141 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:13.141 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:13.141 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:13.141 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:13.141 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:13.141 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.141 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.141 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.141 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.141 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.141 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.397 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.397 "name": "Existed_Raid", 00:16:13.397 "uuid": "543cdc9d-6787-435f-9a46-3ac193d5cc85", 00:16:13.397 "strip_size_kb": 64, 00:16:13.397 "state": "configuring", 00:16:13.397 "raid_level": "raid0", 00:16:13.397 "superblock": true, 00:16:13.397 "num_base_bdevs": 4, 00:16:13.397 "num_base_bdevs_discovered": 3, 00:16:13.397 "num_base_bdevs_operational": 4, 00:16:13.397 "base_bdevs_list": [ 00:16:13.397 { 00:16:13.397 "name": "BaseBdev1", 00:16:13.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.397 "is_configured": false, 00:16:13.397 "data_offset": 0, 00:16:13.397 "data_size": 0 00:16:13.397 }, 00:16:13.397 { 00:16:13.397 "name": "BaseBdev2", 00:16:13.397 "uuid": "d07a54ae-f57e-4a79-a540-eddcf96f33d2", 00:16:13.397 "is_configured": true, 00:16:13.397 "data_offset": 2048, 00:16:13.397 "data_size": 63488 00:16:13.397 }, 00:16:13.397 { 00:16:13.397 "name": "BaseBdev3", 00:16:13.397 "uuid": "5b1278c2-8ce3-4946-ad35-37ba4acc7a10", 00:16:13.397 "is_configured": true, 00:16:13.397 "data_offset": 2048, 00:16:13.397 "data_size": 63488 00:16:13.397 }, 00:16:13.397 { 00:16:13.397 "name": "BaseBdev4", 00:16:13.397 "uuid": "f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812", 00:16:13.397 "is_configured": true, 00:16:13.397 "data_offset": 2048, 00:16:13.397 "data_size": 63488 00:16:13.397 } 00:16:13.397 ] 00:16:13.397 }' 00:16:13.397 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.397 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.962 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:14.221 [2024-05-15 03:10:45.255534] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:14.221 03:10:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:14.221 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:14.221 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:14.221 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:14.221 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:14.221 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:14.221 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.221 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.221 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.221 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.221 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.221 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.479 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.479 "name": "Existed_Raid", 00:16:14.479 "uuid": "543cdc9d-6787-435f-9a46-3ac193d5cc85", 00:16:14.479 "strip_size_kb": 64, 00:16:14.479 "state": "configuring", 00:16:14.479 "raid_level": "raid0", 00:16:14.479 "superblock": true, 00:16:14.479 "num_base_bdevs": 4, 00:16:14.479 "num_base_bdevs_discovered": 2, 00:16:14.479 "num_base_bdevs_operational": 4, 00:16:14.479 "base_bdevs_list": [ 00:16:14.479 { 00:16:14.479 "name": "BaseBdev1", 00:16:14.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.479 "is_configured": false, 00:16:14.479 "data_offset": 0, 00:16:14.479 "data_size": 0 00:16:14.479 }, 00:16:14.479 { 00:16:14.479 "name": null, 00:16:14.479 "uuid": "d07a54ae-f57e-4a79-a540-eddcf96f33d2", 00:16:14.479 "is_configured": false, 00:16:14.479 "data_offset": 2048, 00:16:14.479 "data_size": 63488 00:16:14.479 }, 00:16:14.479 { 00:16:14.479 "name": "BaseBdev3", 00:16:14.479 "uuid": "5b1278c2-8ce3-4946-ad35-37ba4acc7a10", 00:16:14.479 "is_configured": true, 00:16:14.479 "data_offset": 2048, 00:16:14.479 "data_size": 63488 00:16:14.479 }, 00:16:14.479 { 00:16:14.479 "name": "BaseBdev4", 00:16:14.479 "uuid": "f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812", 00:16:14.479 "is_configured": true, 00:16:14.479 "data_offset": 2048, 00:16:14.479 "data_size": 63488 00:16:14.479 } 00:16:14.479 ] 00:16:14.479 }' 00:16:14.479 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.479 03:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.044 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.044 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:15.303 03:10:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:16:15.303 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:15.560 [2024-05-15 03:10:46.650602] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.560 BaseBdev1 00:16:15.560 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:16:15.560 03:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:15.560 03:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:15.560 03:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:15.560 03:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:15.560 03:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:15.560 03:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:15.817 03:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:15.817 [ 00:16:15.817 { 00:16:15.817 "name": "BaseBdev1", 00:16:15.817 "aliases": [ 00:16:15.817 "48d7bf63-ab6a-4693-a6ff-3e51c4b21407" 00:16:15.817 ], 00:16:15.817 "product_name": "Malloc disk", 00:16:15.817 "block_size": 512, 00:16:15.817 "num_blocks": 65536, 00:16:15.817 "uuid": "48d7bf63-ab6a-4693-a6ff-3e51c4b21407", 00:16:15.817 "assigned_rate_limits": { 00:16:15.817 "rw_ios_per_sec": 0, 00:16:15.817 "rw_mbytes_per_sec": 0, 00:16:15.817 "r_mbytes_per_sec": 0, 00:16:15.817 "w_mbytes_per_sec": 0 00:16:15.817 }, 00:16:15.817 "claimed": true, 00:16:15.817 "claim_type": "exclusive_write", 00:16:15.817 "zoned": false, 00:16:15.817 "supported_io_types": { 00:16:15.817 "read": true, 00:16:15.817 "write": true, 00:16:15.817 "unmap": true, 00:16:15.817 "write_zeroes": true, 00:16:15.817 "flush": true, 00:16:15.817 "reset": true, 00:16:15.817 "compare": false, 00:16:15.817 "compare_and_write": false, 00:16:15.817 "abort": true, 00:16:15.817 "nvme_admin": false, 00:16:15.817 "nvme_io": false 00:16:15.817 }, 00:16:15.817 "memory_domains": [ 00:16:15.817 { 00:16:15.817 "dma_device_id": "system", 00:16:15.817 "dma_device_type": 1 00:16:15.817 }, 00:16:15.817 { 00:16:15.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.817 "dma_device_type": 2 00:16:15.817 } 00:16:15.817 ], 00:16:15.817 "driver_specific": {} 00:16:15.817 } 00:16:15.817 ] 00:16:15.817 03:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:15.817 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:15.817 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:15.817 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:15.818 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:15.818 03:10:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:15.818 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:15.818 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:15.818 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:15.818 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:15.818 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:15.818 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.818 03:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.076 03:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:16.076 "name": "Existed_Raid", 00:16:16.076 "uuid": "543cdc9d-6787-435f-9a46-3ac193d5cc85", 00:16:16.076 "strip_size_kb": 64, 00:16:16.076 "state": "configuring", 00:16:16.076 "raid_level": "raid0", 00:16:16.076 "superblock": true, 00:16:16.076 "num_base_bdevs": 4, 00:16:16.076 "num_base_bdevs_discovered": 3, 00:16:16.076 "num_base_bdevs_operational": 4, 00:16:16.076 "base_bdevs_list": [ 00:16:16.076 { 00:16:16.076 "name": "BaseBdev1", 00:16:16.076 "uuid": "48d7bf63-ab6a-4693-a6ff-3e51c4b21407", 00:16:16.076 "is_configured": true, 00:16:16.076 "data_offset": 2048, 00:16:16.076 "data_size": 63488 00:16:16.076 }, 00:16:16.076 { 00:16:16.076 "name": null, 00:16:16.076 "uuid": "d07a54ae-f57e-4a79-a540-eddcf96f33d2", 00:16:16.076 "is_configured": false, 00:16:16.077 "data_offset": 2048, 00:16:16.077 "data_size": 63488 00:16:16.077 }, 00:16:16.077 { 00:16:16.077 "name": "BaseBdev3", 00:16:16.077 "uuid": "5b1278c2-8ce3-4946-ad35-37ba4acc7a10", 00:16:16.077 "is_configured": true, 00:16:16.077 "data_offset": 2048, 00:16:16.077 "data_size": 63488 00:16:16.077 }, 00:16:16.077 { 00:16:16.077 "name": "BaseBdev4", 00:16:16.077 "uuid": "f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812", 00:16:16.077 "is_configured": true, 00:16:16.077 "data_offset": 2048, 00:16:16.077 "data_size": 63488 00:16:16.077 } 00:16:16.077 ] 00:16:16.077 }' 00:16:16.077 03:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:16.077 03:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.010 03:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.010 03:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:17.010 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:17.010 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:17.268 [2024-05-15 03:10:48.343193] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:17.269 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 4 00:16:17.269 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:17.269 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:17.269 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:17.269 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:17.269 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:17.269 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:17.269 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:17.269 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:17.269 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:17.269 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.269 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.526 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:17.526 "name": "Existed_Raid", 00:16:17.526 "uuid": "543cdc9d-6787-435f-9a46-3ac193d5cc85", 00:16:17.526 "strip_size_kb": 64, 00:16:17.526 "state": "configuring", 00:16:17.526 "raid_level": "raid0", 00:16:17.526 "superblock": true, 00:16:17.526 "num_base_bdevs": 4, 00:16:17.526 "num_base_bdevs_discovered": 2, 00:16:17.526 "num_base_bdevs_operational": 4, 00:16:17.526 "base_bdevs_list": [ 00:16:17.526 { 00:16:17.526 "name": "BaseBdev1", 00:16:17.526 "uuid": "48d7bf63-ab6a-4693-a6ff-3e51c4b21407", 00:16:17.526 "is_configured": true, 00:16:17.526 "data_offset": 2048, 00:16:17.526 "data_size": 63488 00:16:17.526 }, 00:16:17.526 { 00:16:17.526 "name": null, 00:16:17.526 "uuid": "d07a54ae-f57e-4a79-a540-eddcf96f33d2", 00:16:17.526 "is_configured": false, 00:16:17.526 "data_offset": 2048, 00:16:17.526 "data_size": 63488 00:16:17.526 }, 00:16:17.526 { 00:16:17.526 "name": null, 00:16:17.526 "uuid": "5b1278c2-8ce3-4946-ad35-37ba4acc7a10", 00:16:17.526 "is_configured": false, 00:16:17.526 "data_offset": 2048, 00:16:17.526 "data_size": 63488 00:16:17.526 }, 00:16:17.526 { 00:16:17.526 "name": "BaseBdev4", 00:16:17.526 "uuid": "f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812", 00:16:17.526 "is_configured": true, 00:16:17.526 "data_offset": 2048, 00:16:17.526 "data_size": 63488 00:16:17.526 } 00:16:17.526 ] 00:16:17.526 }' 00:16:17.526 03:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:17.526 03:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.127 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.127 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:18.385 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:16:18.385 03:10:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:18.643 [2024-05-15 03:10:49.734950] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.643 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:18.643 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:18.643 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:18.643 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:18.643 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:18.643 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:18.643 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.643 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.643 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.643 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:18.643 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.644 03:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.901 03:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:18.901 "name": "Existed_Raid", 00:16:18.901 "uuid": "543cdc9d-6787-435f-9a46-3ac193d5cc85", 00:16:18.901 "strip_size_kb": 64, 00:16:18.901 "state": "configuring", 00:16:18.901 "raid_level": "raid0", 00:16:18.901 "superblock": true, 00:16:18.901 "num_base_bdevs": 4, 00:16:18.901 "num_base_bdevs_discovered": 3, 00:16:18.901 "num_base_bdevs_operational": 4, 00:16:18.901 "base_bdevs_list": [ 00:16:18.901 { 00:16:18.901 "name": "BaseBdev1", 00:16:18.901 "uuid": "48d7bf63-ab6a-4693-a6ff-3e51c4b21407", 00:16:18.901 "is_configured": true, 00:16:18.901 "data_offset": 2048, 00:16:18.901 "data_size": 63488 00:16:18.901 }, 00:16:18.901 { 00:16:18.901 "name": null, 00:16:18.901 "uuid": "d07a54ae-f57e-4a79-a540-eddcf96f33d2", 00:16:18.901 "is_configured": false, 00:16:18.901 "data_offset": 2048, 00:16:18.901 "data_size": 63488 00:16:18.901 }, 00:16:18.901 { 00:16:18.901 "name": "BaseBdev3", 00:16:18.901 "uuid": "5b1278c2-8ce3-4946-ad35-37ba4acc7a10", 00:16:18.901 "is_configured": true, 00:16:18.901 "data_offset": 2048, 00:16:18.901 "data_size": 63488 00:16:18.901 }, 00:16:18.901 { 00:16:18.901 "name": "BaseBdev4", 00:16:18.901 "uuid": "f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812", 00:16:18.901 "is_configured": true, 00:16:18.901 "data_offset": 2048, 00:16:18.901 "data_size": 63488 00:16:18.901 } 00:16:18.901 ] 00:16:18.901 }' 00:16:18.901 03:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:18.901 03:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.466 03:10:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.466 03:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:19.723 03:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:16:19.723 03:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:19.981 [2024-05-15 03:10:51.110636] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:20.239 "name": "Existed_Raid", 00:16:20.239 "uuid": "543cdc9d-6787-435f-9a46-3ac193d5cc85", 00:16:20.239 "strip_size_kb": 64, 00:16:20.239 "state": "configuring", 00:16:20.239 "raid_level": "raid0", 00:16:20.239 "superblock": true, 00:16:20.239 "num_base_bdevs": 4, 00:16:20.239 "num_base_bdevs_discovered": 2, 00:16:20.239 "num_base_bdevs_operational": 4, 00:16:20.239 "base_bdevs_list": [ 00:16:20.239 { 00:16:20.239 "name": null, 00:16:20.239 "uuid": "48d7bf63-ab6a-4693-a6ff-3e51c4b21407", 00:16:20.239 "is_configured": false, 00:16:20.239 "data_offset": 2048, 00:16:20.239 "data_size": 63488 00:16:20.239 }, 00:16:20.239 { 00:16:20.239 "name": null, 00:16:20.239 "uuid": "d07a54ae-f57e-4a79-a540-eddcf96f33d2", 00:16:20.239 "is_configured": false, 00:16:20.239 "data_offset": 2048, 00:16:20.239 "data_size": 63488 00:16:20.239 }, 00:16:20.239 { 00:16:20.239 "name": "BaseBdev3", 00:16:20.239 "uuid": "5b1278c2-8ce3-4946-ad35-37ba4acc7a10", 00:16:20.239 "is_configured": true, 00:16:20.239 "data_offset": 2048, 00:16:20.239 "data_size": 63488 00:16:20.239 }, 00:16:20.239 { 00:16:20.239 "name": "BaseBdev4", 00:16:20.239 "uuid": 
"f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812", 00:16:20.239 "is_configured": true, 00:16:20.239 "data_offset": 2048, 00:16:20.239 "data_size": 63488 00:16:20.239 } 00:16:20.239 ] 00:16:20.239 }' 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:20.239 03:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.172 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:21.172 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.172 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:16:21.172 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:21.430 [2024-05-15 03:10:52.508900] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.430 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:21.430 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:21.430 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:21.430 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:21.430 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:21.430 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:21.430 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.430 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.430 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.430 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.430 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.430 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.688 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:21.688 "name": "Existed_Raid", 00:16:21.688 "uuid": "543cdc9d-6787-435f-9a46-3ac193d5cc85", 00:16:21.688 "strip_size_kb": 64, 00:16:21.688 "state": "configuring", 00:16:21.688 "raid_level": "raid0", 00:16:21.688 "superblock": true, 00:16:21.688 "num_base_bdevs": 4, 00:16:21.688 "num_base_bdevs_discovered": 3, 00:16:21.688 "num_base_bdevs_operational": 4, 00:16:21.688 "base_bdevs_list": [ 00:16:21.688 { 00:16:21.688 "name": null, 00:16:21.688 "uuid": "48d7bf63-ab6a-4693-a6ff-3e51c4b21407", 00:16:21.688 "is_configured": false, 00:16:21.688 "data_offset": 2048, 00:16:21.688 "data_size": 63488 00:16:21.688 }, 00:16:21.688 { 00:16:21.688 "name": "BaseBdev2", 00:16:21.688 "uuid": 
"d07a54ae-f57e-4a79-a540-eddcf96f33d2", 00:16:21.688 "is_configured": true, 00:16:21.688 "data_offset": 2048, 00:16:21.688 "data_size": 63488 00:16:21.688 }, 00:16:21.688 { 00:16:21.688 "name": "BaseBdev3", 00:16:21.688 "uuid": "5b1278c2-8ce3-4946-ad35-37ba4acc7a10", 00:16:21.688 "is_configured": true, 00:16:21.688 "data_offset": 2048, 00:16:21.688 "data_size": 63488 00:16:21.688 }, 00:16:21.688 { 00:16:21.688 "name": "BaseBdev4", 00:16:21.688 "uuid": "f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812", 00:16:21.688 "is_configured": true, 00:16:21.688 "data_offset": 2048, 00:16:21.688 "data_size": 63488 00:16:21.688 } 00:16:21.688 ] 00:16:21.688 }' 00:16:21.688 03:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:21.688 03:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.253 03:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.509 03:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:22.509 03:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:16:22.509 03:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.509 03:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:22.766 03:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 48d7bf63-ab6a-4693-a6ff-3e51c4b21407 00:16:23.023 [2024-05-15 03:10:54.148566] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:23.023 [2024-05-15 03:10:54.148723] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x12d0060 00:16:23.023 [2024-05-15 03:10:54.148735] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:23.023 [2024-05-15 03:10:54.148940] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12d0ac0 00:16:23.023 [2024-05-15 03:10:54.149071] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12d0060 00:16:23.023 [2024-05-15 03:10:54.149080] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x12d0060 00:16:23.023 [2024-05-15 03:10:54.149173] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.023 NewBaseBdev 00:16:23.023 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:16:23.023 03:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:16:23.023 03:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:23.023 03:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:23.023 03:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:23.023 03:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:23.023 03:10:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.280 03:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:23.537 [ 00:16:23.537 { 00:16:23.537 "name": "NewBaseBdev", 00:16:23.537 "aliases": [ 00:16:23.537 "48d7bf63-ab6a-4693-a6ff-3e51c4b21407" 00:16:23.537 ], 00:16:23.537 "product_name": "Malloc disk", 00:16:23.537 "block_size": 512, 00:16:23.537 "num_blocks": 65536, 00:16:23.537 "uuid": "48d7bf63-ab6a-4693-a6ff-3e51c4b21407", 00:16:23.537 "assigned_rate_limits": { 00:16:23.537 "rw_ios_per_sec": 0, 00:16:23.537 "rw_mbytes_per_sec": 0, 00:16:23.537 "r_mbytes_per_sec": 0, 00:16:23.537 "w_mbytes_per_sec": 0 00:16:23.537 }, 00:16:23.537 "claimed": true, 00:16:23.537 "claim_type": "exclusive_write", 00:16:23.537 "zoned": false, 00:16:23.537 "supported_io_types": { 00:16:23.537 "read": true, 00:16:23.537 "write": true, 00:16:23.537 "unmap": true, 00:16:23.537 "write_zeroes": true, 00:16:23.537 "flush": true, 00:16:23.537 "reset": true, 00:16:23.537 "compare": false, 00:16:23.537 "compare_and_write": false, 00:16:23.537 "abort": true, 00:16:23.537 "nvme_admin": false, 00:16:23.537 "nvme_io": false 00:16:23.537 }, 00:16:23.537 "memory_domains": [ 00:16:23.537 { 00:16:23.537 "dma_device_id": "system", 00:16:23.537 "dma_device_type": 1 00:16:23.537 }, 00:16:23.537 { 00:16:23.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.537 "dma_device_type": 2 00:16:23.537 } 00:16:23.537 ], 00:16:23.537 "driver_specific": {} 00:16:23.537 } 00:16:23.537 ] 00:16:23.537 03:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:23.537 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:23.537 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:23.537 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:23.537 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:23.537 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:23.537 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:23.537 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.537 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.537 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.538 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.538 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.538 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.795 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.795 "name": "Existed_Raid", 
00:16:23.795 "uuid": "543cdc9d-6787-435f-9a46-3ac193d5cc85", 00:16:23.795 "strip_size_kb": 64, 00:16:23.795 "state": "online", 00:16:23.795 "raid_level": "raid0", 00:16:23.795 "superblock": true, 00:16:23.795 "num_base_bdevs": 4, 00:16:23.795 "num_base_bdevs_discovered": 4, 00:16:23.795 "num_base_bdevs_operational": 4, 00:16:23.795 "base_bdevs_list": [ 00:16:23.795 { 00:16:23.795 "name": "NewBaseBdev", 00:16:23.795 "uuid": "48d7bf63-ab6a-4693-a6ff-3e51c4b21407", 00:16:23.795 "is_configured": true, 00:16:23.795 "data_offset": 2048, 00:16:23.795 "data_size": 63488 00:16:23.795 }, 00:16:23.795 { 00:16:23.795 "name": "BaseBdev2", 00:16:23.795 "uuid": "d07a54ae-f57e-4a79-a540-eddcf96f33d2", 00:16:23.795 "is_configured": true, 00:16:23.795 "data_offset": 2048, 00:16:23.795 "data_size": 63488 00:16:23.795 }, 00:16:23.795 { 00:16:23.795 "name": "BaseBdev3", 00:16:23.795 "uuid": "5b1278c2-8ce3-4946-ad35-37ba4acc7a10", 00:16:23.795 "is_configured": true, 00:16:23.795 "data_offset": 2048, 00:16:23.795 "data_size": 63488 00:16:23.795 }, 00:16:23.795 { 00:16:23.795 "name": "BaseBdev4", 00:16:23.795 "uuid": "f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812", 00:16:23.795 "is_configured": true, 00:16:23.795 "data_offset": 2048, 00:16:23.795 "data_size": 63488 00:16:23.795 } 00:16:23.795 ] 00:16:23.795 }' 00:16:23.795 03:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.795 03:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.727 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:16:24.727 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:16:24.727 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:24.727 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:24.727 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:24.727 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:16:24.727 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:24.727 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:24.727 [2024-05-15 03:10:55.793313] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.727 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:24.727 "name": "Existed_Raid", 00:16:24.727 "aliases": [ 00:16:24.727 "543cdc9d-6787-435f-9a46-3ac193d5cc85" 00:16:24.727 ], 00:16:24.727 "product_name": "Raid Volume", 00:16:24.727 "block_size": 512, 00:16:24.727 "num_blocks": 253952, 00:16:24.727 "uuid": "543cdc9d-6787-435f-9a46-3ac193d5cc85", 00:16:24.727 "assigned_rate_limits": { 00:16:24.727 "rw_ios_per_sec": 0, 00:16:24.727 "rw_mbytes_per_sec": 0, 00:16:24.727 "r_mbytes_per_sec": 0, 00:16:24.727 "w_mbytes_per_sec": 0 00:16:24.727 }, 00:16:24.727 "claimed": false, 00:16:24.727 "zoned": false, 00:16:24.727 "supported_io_types": { 00:16:24.727 "read": true, 00:16:24.727 "write": true, 00:16:24.727 "unmap": true, 00:16:24.727 "write_zeroes": true, 00:16:24.727 "flush": true, 00:16:24.727 "reset": true, 00:16:24.727 "compare": false, 00:16:24.727 
"compare_and_write": false, 00:16:24.727 "abort": false, 00:16:24.727 "nvme_admin": false, 00:16:24.727 "nvme_io": false 00:16:24.727 }, 00:16:24.727 "memory_domains": [ 00:16:24.727 { 00:16:24.727 "dma_device_id": "system", 00:16:24.727 "dma_device_type": 1 00:16:24.727 }, 00:16:24.727 { 00:16:24.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.728 "dma_device_type": 2 00:16:24.728 }, 00:16:24.728 { 00:16:24.728 "dma_device_id": "system", 00:16:24.728 "dma_device_type": 1 00:16:24.728 }, 00:16:24.728 { 00:16:24.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.728 "dma_device_type": 2 00:16:24.728 }, 00:16:24.728 { 00:16:24.728 "dma_device_id": "system", 00:16:24.728 "dma_device_type": 1 00:16:24.728 }, 00:16:24.728 { 00:16:24.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.728 "dma_device_type": 2 00:16:24.728 }, 00:16:24.728 { 00:16:24.728 "dma_device_id": "system", 00:16:24.728 "dma_device_type": 1 00:16:24.728 }, 00:16:24.728 { 00:16:24.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.728 "dma_device_type": 2 00:16:24.728 } 00:16:24.728 ], 00:16:24.728 "driver_specific": { 00:16:24.728 "raid": { 00:16:24.728 "uuid": "543cdc9d-6787-435f-9a46-3ac193d5cc85", 00:16:24.728 "strip_size_kb": 64, 00:16:24.728 "state": "online", 00:16:24.728 "raid_level": "raid0", 00:16:24.728 "superblock": true, 00:16:24.728 "num_base_bdevs": 4, 00:16:24.728 "num_base_bdevs_discovered": 4, 00:16:24.728 "num_base_bdevs_operational": 4, 00:16:24.728 "base_bdevs_list": [ 00:16:24.728 { 00:16:24.728 "name": "NewBaseBdev", 00:16:24.728 "uuid": "48d7bf63-ab6a-4693-a6ff-3e51c4b21407", 00:16:24.728 "is_configured": true, 00:16:24.728 "data_offset": 2048, 00:16:24.728 "data_size": 63488 00:16:24.728 }, 00:16:24.728 { 00:16:24.728 "name": "BaseBdev2", 00:16:24.728 "uuid": "d07a54ae-f57e-4a79-a540-eddcf96f33d2", 00:16:24.728 "is_configured": true, 00:16:24.728 "data_offset": 2048, 00:16:24.728 "data_size": 63488 00:16:24.728 }, 00:16:24.728 { 00:16:24.728 "name": "BaseBdev3", 00:16:24.728 "uuid": "5b1278c2-8ce3-4946-ad35-37ba4acc7a10", 00:16:24.728 "is_configured": true, 00:16:24.728 "data_offset": 2048, 00:16:24.728 "data_size": 63488 00:16:24.728 }, 00:16:24.728 { 00:16:24.728 "name": "BaseBdev4", 00:16:24.728 "uuid": "f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812", 00:16:24.728 "is_configured": true, 00:16:24.728 "data_offset": 2048, 00:16:24.728 "data_size": 63488 00:16:24.728 } 00:16:24.728 ] 00:16:24.728 } 00:16:24.728 } 00:16:24.728 }' 00:16:24.728 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:24.728 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:16:24.728 BaseBdev2 00:16:24.728 BaseBdev3 00:16:24.728 BaseBdev4' 00:16:24.728 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:24.728 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:24.728 03:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:24.985 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:24.985 "name": "NewBaseBdev", 00:16:24.985 "aliases": [ 00:16:24.985 "48d7bf63-ab6a-4693-a6ff-3e51c4b21407" 00:16:24.985 ], 00:16:24.985 "product_name": "Malloc disk", 
00:16:24.985 "block_size": 512, 00:16:24.985 "num_blocks": 65536, 00:16:24.985 "uuid": "48d7bf63-ab6a-4693-a6ff-3e51c4b21407", 00:16:24.985 "assigned_rate_limits": { 00:16:24.985 "rw_ios_per_sec": 0, 00:16:24.985 "rw_mbytes_per_sec": 0, 00:16:24.985 "r_mbytes_per_sec": 0, 00:16:24.985 "w_mbytes_per_sec": 0 00:16:24.985 }, 00:16:24.985 "claimed": true, 00:16:24.985 "claim_type": "exclusive_write", 00:16:24.985 "zoned": false, 00:16:24.985 "supported_io_types": { 00:16:24.985 "read": true, 00:16:24.985 "write": true, 00:16:24.985 "unmap": true, 00:16:24.985 "write_zeroes": true, 00:16:24.985 "flush": true, 00:16:24.985 "reset": true, 00:16:24.985 "compare": false, 00:16:24.985 "compare_and_write": false, 00:16:24.985 "abort": true, 00:16:24.985 "nvme_admin": false, 00:16:24.985 "nvme_io": false 00:16:24.985 }, 00:16:24.985 "memory_domains": [ 00:16:24.985 { 00:16:24.985 "dma_device_id": "system", 00:16:24.985 "dma_device_type": 1 00:16:24.985 }, 00:16:24.985 { 00:16:24.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.985 "dma_device_type": 2 00:16:24.985 } 00:16:24.985 ], 00:16:24.985 "driver_specific": {} 00:16:24.985 }' 00:16:24.985 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:25.242 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:25.242 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:25.242 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:25.242 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:25.242 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.242 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:25.242 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:25.500 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.500 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:25.500 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:25.500 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:25.500 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:25.500 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:25.500 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:25.758 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:25.758 "name": "BaseBdev2", 00:16:25.758 "aliases": [ 00:16:25.758 "d07a54ae-f57e-4a79-a540-eddcf96f33d2" 00:16:25.758 ], 00:16:25.758 "product_name": "Malloc disk", 00:16:25.758 "block_size": 512, 00:16:25.758 "num_blocks": 65536, 00:16:25.758 "uuid": "d07a54ae-f57e-4a79-a540-eddcf96f33d2", 00:16:25.758 "assigned_rate_limits": { 00:16:25.758 "rw_ios_per_sec": 0, 00:16:25.758 "rw_mbytes_per_sec": 0, 00:16:25.758 "r_mbytes_per_sec": 0, 00:16:25.758 "w_mbytes_per_sec": 0 00:16:25.758 }, 00:16:25.758 "claimed": true, 00:16:25.758 "claim_type": "exclusive_write", 00:16:25.758 "zoned": false, 
00:16:25.758 "supported_io_types": { 00:16:25.758 "read": true, 00:16:25.758 "write": true, 00:16:25.758 "unmap": true, 00:16:25.758 "write_zeroes": true, 00:16:25.758 "flush": true, 00:16:25.758 "reset": true, 00:16:25.758 "compare": false, 00:16:25.758 "compare_and_write": false, 00:16:25.758 "abort": true, 00:16:25.758 "nvme_admin": false, 00:16:25.758 "nvme_io": false 00:16:25.758 }, 00:16:25.758 "memory_domains": [ 00:16:25.758 { 00:16:25.758 "dma_device_id": "system", 00:16:25.758 "dma_device_type": 1 00:16:25.758 }, 00:16:25.758 { 00:16:25.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.758 "dma_device_type": 2 00:16:25.758 } 00:16:25.758 ], 00:16:25.758 "driver_specific": {} 00:16:25.758 }' 00:16:25.758 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:25.758 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:25.758 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:25.758 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:25.758 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:26.016 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.016 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:26.016 03:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:26.016 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:26.016 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:26.016 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:26.016 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:26.016 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:26.016 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:26.016 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:26.273 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:26.273 "name": "BaseBdev3", 00:16:26.273 "aliases": [ 00:16:26.273 "5b1278c2-8ce3-4946-ad35-37ba4acc7a10" 00:16:26.273 ], 00:16:26.273 "product_name": "Malloc disk", 00:16:26.273 "block_size": 512, 00:16:26.273 "num_blocks": 65536, 00:16:26.273 "uuid": "5b1278c2-8ce3-4946-ad35-37ba4acc7a10", 00:16:26.273 "assigned_rate_limits": { 00:16:26.273 "rw_ios_per_sec": 0, 00:16:26.273 "rw_mbytes_per_sec": 0, 00:16:26.273 "r_mbytes_per_sec": 0, 00:16:26.273 "w_mbytes_per_sec": 0 00:16:26.273 }, 00:16:26.273 "claimed": true, 00:16:26.273 "claim_type": "exclusive_write", 00:16:26.273 "zoned": false, 00:16:26.273 "supported_io_types": { 00:16:26.273 "read": true, 00:16:26.273 "write": true, 00:16:26.273 "unmap": true, 00:16:26.273 "write_zeroes": true, 00:16:26.273 "flush": true, 00:16:26.273 "reset": true, 00:16:26.273 "compare": false, 00:16:26.273 "compare_and_write": false, 00:16:26.273 "abort": true, 00:16:26.273 "nvme_admin": false, 00:16:26.273 "nvme_io": false 00:16:26.273 }, 00:16:26.273 "memory_domains": [ 
00:16:26.273 { 00:16:26.273 "dma_device_id": "system", 00:16:26.273 "dma_device_type": 1 00:16:26.273 }, 00:16:26.273 { 00:16:26.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.273 "dma_device_type": 2 00:16:26.273 } 00:16:26.273 ], 00:16:26.273 "driver_specific": {} 00:16:26.273 }' 00:16:26.273 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:26.273 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:26.531 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:26.531 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:26.531 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:26.531 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.531 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:26.531 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:26.531 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:26.531 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:26.531 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:26.789 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:26.789 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:26.789 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:26.789 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:27.046 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:27.047 "name": "BaseBdev4", 00:16:27.047 "aliases": [ 00:16:27.047 "f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812" 00:16:27.047 ], 00:16:27.047 "product_name": "Malloc disk", 00:16:27.047 "block_size": 512, 00:16:27.047 "num_blocks": 65536, 00:16:27.047 "uuid": "f790e5e3-bd2e-4ad6-9175-d0d6ee1f1812", 00:16:27.047 "assigned_rate_limits": { 00:16:27.047 "rw_ios_per_sec": 0, 00:16:27.047 "rw_mbytes_per_sec": 0, 00:16:27.047 "r_mbytes_per_sec": 0, 00:16:27.047 "w_mbytes_per_sec": 0 00:16:27.047 }, 00:16:27.047 "claimed": true, 00:16:27.047 "claim_type": "exclusive_write", 00:16:27.047 "zoned": false, 00:16:27.047 "supported_io_types": { 00:16:27.047 "read": true, 00:16:27.047 "write": true, 00:16:27.047 "unmap": true, 00:16:27.047 "write_zeroes": true, 00:16:27.047 "flush": true, 00:16:27.047 "reset": true, 00:16:27.047 "compare": false, 00:16:27.047 "compare_and_write": false, 00:16:27.047 "abort": true, 00:16:27.047 "nvme_admin": false, 00:16:27.047 "nvme_io": false 00:16:27.047 }, 00:16:27.047 "memory_domains": [ 00:16:27.047 { 00:16:27.047 "dma_device_id": "system", 00:16:27.047 "dma_device_type": 1 00:16:27.047 }, 00:16:27.047 { 00:16:27.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.047 "dma_device_type": 2 00:16:27.047 } 00:16:27.047 ], 00:16:27.047 "driver_specific": {} 00:16:27.047 }' 00:16:27.047 03:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:27.047 03:10:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:27.047 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:27.047 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:27.047 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:27.047 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:27.047 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:27.304 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:27.304 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:27.304 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:27.304 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:27.304 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:27.304 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:27.563 [2024-05-15 03:10:58.580460] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.563 [2024-05-15 03:10:58.580485] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.563 [2024-05-15 03:10:58.580534] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.563 [2024-05-15 03:10:58.580598] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.563 [2024-05-15 03:10:58.580606] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12d0060 name Existed_Raid, state offline 00:16:27.563 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 4108902 00:16:27.563 03:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 4108902 ']' 00:16:27.563 03:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 4108902 00:16:27.563 03:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:16:27.563 03:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:27.563 03:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4108902 00:16:27.563 03:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:27.563 03:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:27.563 03:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4108902' 00:16:27.563 killing process with pid 4108902 00:16:27.563 03:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 4108902 00:16:27.563 [2024-05-15 03:10:58.643976] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.563 03:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 4108902 00:16:27.563 [2024-05-15 03:10:58.678976] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.820 03:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:16:27.820 00:16:27.820 real 0m32.354s 00:16:27.820 user 1m0.793s 00:16:27.820 sys 0m4.418s 00:16:27.820 03:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:27.820 03:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.820 ************************************ 00:16:27.820 END TEST raid_state_function_test_sb 00:16:27.820 ************************************ 00:16:27.821 03:10:58 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:16:27.821 03:10:58 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:27.821 03:10:58 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:27.821 03:10:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.079 ************************************ 00:16:28.079 START TEST raid_superblock_test 00:16:28.079 ************************************ 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 4 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=4114900 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 4114900 /var/tmp/spdk-raid.sock 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 4114900 ']' 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:28.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:28.079 03:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.079 [2024-05-15 03:10:59.037564] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:16:28.079 [2024-05-15 03:10:59.037619] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4114900 ] 00:16:28.079 [2024-05-15 03:10:59.133885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.079 [2024-05-15 03:10:59.227124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.338 [2024-05-15 03:10:59.285200] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.338 [2024-05-15 03:10:59.285236] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.904 03:10:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:28.904 03:10:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:16:28.904 03:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:28.904 03:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.904 03:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:28.904 03:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:28.904 03:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:28.904 03:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.904 03:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.904 03:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.904 03:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:29.162 malloc1 00:16:29.162 03:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:29.420 [2024-05-15 03:11:00.478737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:29.420 [2024-05-15 03:11:00.478779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.420 [2024-05-15 03:11:00.478798] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24dba00 00:16:29.420 [2024-05-15 03:11:00.478807] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.420 [2024-05-15 03:11:00.480493] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.420 [2024-05-15 03:11:00.480520] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:29.420 pt1 00:16:29.420 03:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:29.420 03:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:29.420 03:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:29.420 03:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:29.420 03:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:29.420 03:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:29.420 03:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:29.420 03:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:29.420 03:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:29.678 malloc2 00:16:29.678 03:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:29.935 [2024-05-15 03:11:00.992798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:29.935 [2024-05-15 03:11:00.992842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.935 [2024-05-15 03:11:00.992870] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24dc5f0 00:16:29.935 [2024-05-15 03:11:00.992880] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.935 [2024-05-15 03:11:00.994442] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.935 [2024-05-15 03:11:00.994468] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:29.935 pt2 00:16:29.935 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:29.935 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:29.935 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:29.935 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:29.935 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:29.935 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:29.935 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:29.935 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:29.935 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:30.191 malloc3 
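For reference, the base-device setup raid_superblock_test is tracing here is one malloc/passthru pairing repeated per slot: a 32 MB ramdisk with 512-byte blocks, wrapped by a passthru bdev claimed under a fixed UUID. A minimal standalone sketch of a single iteration follows, assuming an SPDK app is already listening on the raid socket shown in this trace; the rpc shell function is shorthand introduced here for readability, not part of the test script:

  # Shorthand for the exact rpc.py invocation used throughout this trace.
  rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  # One base-bdev slot: a 32 MB malloc bdev with 512-byte blocks, then a
  # passthru bdev that claims it under a well-known UUID (mirrors the
  # bdev_raid.sh@425/@426 steps above).
  rpc bdev_malloc_create 32 512 -b malloc1
  rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

The passthru layer is what lets the test later delete pt1..pt4 while leaving the malloc bdevs, and the raid superblocks written onto them, intact.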
00:16:30.191 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:30.448 [2024-05-15 03:11:01.498632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:30.448 [2024-05-15 03:11:01.498672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.448 [2024-05-15 03:11:01.498689] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2681900 00:16:30.448 [2024-05-15 03:11:01.498699] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.448 [2024-05-15 03:11:01.500263] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.448 [2024-05-15 03:11:01.500290] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:30.448 pt3 00:16:30.448 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:30.448 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.448 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:30.448 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:30.448 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:30.448 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.448 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.448 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.448 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:30.706 malloc4 00:16:30.706 03:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:30.963 [2024-05-15 03:11:02.004487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:30.963 [2024-05-15 03:11:02.004531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.963 [2024-05-15 03:11:02.004552] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24d3630 00:16:30.963 [2024-05-15 03:11:02.004561] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.963 [2024-05-15 03:11:02.006138] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.964 [2024-05-15 03:11:02.006165] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:30.964 pt4 00:16:30.964 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:30.964 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.964 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 
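The bdev_raid_create call traced just above assembles the four passthru bdevs into the array whose debug output follows: -r raid0 selects the level, -z 64 a 64 KiB strip size, -n names the volume, and -s requests an on-disk superblock on each base bdev. A sketch of the equivalent manual invocation plus the state check the test performs next, reusing the rpc shorthand from the previous sketch (the jq filter is the same one verify_raid_bdev_state pipes through):

  # Same rpc.py shorthand as in the previous sketch.
  rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  # raid0 across the four passthru bdevs, 64 KiB strips, with superblock.
  rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

  # Should print "online" once all four base bdevs are configured.
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

Those on-disk superblocks are also why the negative test later in this trace, a bdev_raid_create built directly from malloc1..malloc4, is rejected with -17 "File exists": each malloc bdev still carries raid_bdev1's superblock.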
00:16:31.222 [2024-05-15 03:11:02.257183] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:31.222 [2024-05-15 03:11:02.258531] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.222 [2024-05-15 03:11:02.258588] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:31.222 [2024-05-15 03:11:02.258634] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:31.222 [2024-05-15 03:11:02.258814] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x24d4900 00:16:31.222 [2024-05-15 03:11:02.258824] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:31.222 [2024-05-15 03:11:02.259034] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24d48d0 00:16:31.222 [2024-05-15 03:11:02.259190] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24d4900 00:16:31.222 [2024-05-15 03:11:02.259199] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24d4900 00:16:31.222 [2024-05-15 03:11:02.259301] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.222 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:31.222 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:31.222 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:31.222 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:31.222 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:31.222 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:31.222 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.222 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.222 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.222 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.222 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.222 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.479 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.479 "name": "raid_bdev1", 00:16:31.479 "uuid": "27c6502f-166f-4c33-bd04-8d5975e779e3", 00:16:31.479 "strip_size_kb": 64, 00:16:31.479 "state": "online", 00:16:31.479 "raid_level": "raid0", 00:16:31.479 "superblock": true, 00:16:31.479 "num_base_bdevs": 4, 00:16:31.479 "num_base_bdevs_discovered": 4, 00:16:31.479 "num_base_bdevs_operational": 4, 00:16:31.479 "base_bdevs_list": [ 00:16:31.479 { 00:16:31.479 "name": "pt1", 00:16:31.479 "uuid": "3387d292-77d1-5f30-9773-7fa2c090fa36", 00:16:31.479 "is_configured": true, 00:16:31.479 "data_offset": 2048, 00:16:31.479 "data_size": 63488 00:16:31.479 }, 00:16:31.479 { 00:16:31.479 "name": "pt2", 00:16:31.479 "uuid": "58a65cf6-1c6b-520c-9784-499b73022586", 00:16:31.479 "is_configured": true, 00:16:31.479 "data_offset": 2048, 
00:16:31.479 "data_size": 63488 00:16:31.479 }, 00:16:31.479 { 00:16:31.479 "name": "pt3", 00:16:31.479 "uuid": "ef2cb0bd-a58b-59ac-b571-a1b1ae80691d", 00:16:31.479 "is_configured": true, 00:16:31.479 "data_offset": 2048, 00:16:31.479 "data_size": 63488 00:16:31.479 }, 00:16:31.479 { 00:16:31.479 "name": "pt4", 00:16:31.479 "uuid": "8f06579c-74d0-5c48-a098-bb957248c8b3", 00:16:31.479 "is_configured": true, 00:16:31.479 "data_offset": 2048, 00:16:31.479 "data_size": 63488 00:16:31.479 } 00:16:31.479 ] 00:16:31.479 }' 00:16:31.479 03:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.479 03:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.043 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:32.043 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:16:32.043 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:32.043 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:32.043 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:32.043 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:32.043 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:32.043 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:32.299 [2024-05-15 03:11:03.388467] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.299 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:32.299 "name": "raid_bdev1", 00:16:32.299 "aliases": [ 00:16:32.299 "27c6502f-166f-4c33-bd04-8d5975e779e3" 00:16:32.299 ], 00:16:32.299 "product_name": "Raid Volume", 00:16:32.300 "block_size": 512, 00:16:32.300 "num_blocks": 253952, 00:16:32.300 "uuid": "27c6502f-166f-4c33-bd04-8d5975e779e3", 00:16:32.300 "assigned_rate_limits": { 00:16:32.300 "rw_ios_per_sec": 0, 00:16:32.300 "rw_mbytes_per_sec": 0, 00:16:32.300 "r_mbytes_per_sec": 0, 00:16:32.300 "w_mbytes_per_sec": 0 00:16:32.300 }, 00:16:32.300 "claimed": false, 00:16:32.300 "zoned": false, 00:16:32.300 "supported_io_types": { 00:16:32.300 "read": true, 00:16:32.300 "write": true, 00:16:32.300 "unmap": true, 00:16:32.300 "write_zeroes": true, 00:16:32.300 "flush": true, 00:16:32.300 "reset": true, 00:16:32.300 "compare": false, 00:16:32.300 "compare_and_write": false, 00:16:32.300 "abort": false, 00:16:32.300 "nvme_admin": false, 00:16:32.300 "nvme_io": false 00:16:32.300 }, 00:16:32.300 "memory_domains": [ 00:16:32.300 { 00:16:32.300 "dma_device_id": "system", 00:16:32.300 "dma_device_type": 1 00:16:32.300 }, 00:16:32.300 { 00:16:32.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.300 "dma_device_type": 2 00:16:32.300 }, 00:16:32.300 { 00:16:32.300 "dma_device_id": "system", 00:16:32.300 "dma_device_type": 1 00:16:32.300 }, 00:16:32.300 { 00:16:32.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.300 "dma_device_type": 2 00:16:32.300 }, 00:16:32.300 { 00:16:32.300 "dma_device_id": "system", 00:16:32.300 "dma_device_type": 1 00:16:32.300 }, 00:16:32.300 { 00:16:32.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.300 "dma_device_type": 2 00:16:32.300 }, 00:16:32.300 { 
00:16:32.300 "dma_device_id": "system", 00:16:32.300 "dma_device_type": 1 00:16:32.300 }, 00:16:32.300 { 00:16:32.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.300 "dma_device_type": 2 00:16:32.300 } 00:16:32.300 ], 00:16:32.300 "driver_specific": { 00:16:32.300 "raid": { 00:16:32.300 "uuid": "27c6502f-166f-4c33-bd04-8d5975e779e3", 00:16:32.300 "strip_size_kb": 64, 00:16:32.300 "state": "online", 00:16:32.300 "raid_level": "raid0", 00:16:32.300 "superblock": true, 00:16:32.300 "num_base_bdevs": 4, 00:16:32.300 "num_base_bdevs_discovered": 4, 00:16:32.300 "num_base_bdevs_operational": 4, 00:16:32.300 "base_bdevs_list": [ 00:16:32.300 { 00:16:32.300 "name": "pt1", 00:16:32.300 "uuid": "3387d292-77d1-5f30-9773-7fa2c090fa36", 00:16:32.300 "is_configured": true, 00:16:32.300 "data_offset": 2048, 00:16:32.300 "data_size": 63488 00:16:32.300 }, 00:16:32.300 { 00:16:32.300 "name": "pt2", 00:16:32.300 "uuid": "58a65cf6-1c6b-520c-9784-499b73022586", 00:16:32.300 "is_configured": true, 00:16:32.300 "data_offset": 2048, 00:16:32.300 "data_size": 63488 00:16:32.300 }, 00:16:32.300 { 00:16:32.300 "name": "pt3", 00:16:32.300 "uuid": "ef2cb0bd-a58b-59ac-b571-a1b1ae80691d", 00:16:32.300 "is_configured": true, 00:16:32.300 "data_offset": 2048, 00:16:32.300 "data_size": 63488 00:16:32.300 }, 00:16:32.300 { 00:16:32.300 "name": "pt4", 00:16:32.300 "uuid": "8f06579c-74d0-5c48-a098-bb957248c8b3", 00:16:32.300 "is_configured": true, 00:16:32.300 "data_offset": 2048, 00:16:32.300 "data_size": 63488 00:16:32.300 } 00:16:32.300 ] 00:16:32.300 } 00:16:32.300 } 00:16:32.300 }' 00:16:32.300 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.300 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:16:32.300 pt2 00:16:32.300 pt3 00:16:32.300 pt4' 00:16:32.300 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:32.556 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:32.556 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:32.813 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:32.813 "name": "pt1", 00:16:32.813 "aliases": [ 00:16:32.813 "3387d292-77d1-5f30-9773-7fa2c090fa36" 00:16:32.813 ], 00:16:32.813 "product_name": "passthru", 00:16:32.813 "block_size": 512, 00:16:32.813 "num_blocks": 65536, 00:16:32.813 "uuid": "3387d292-77d1-5f30-9773-7fa2c090fa36", 00:16:32.813 "assigned_rate_limits": { 00:16:32.813 "rw_ios_per_sec": 0, 00:16:32.813 "rw_mbytes_per_sec": 0, 00:16:32.813 "r_mbytes_per_sec": 0, 00:16:32.813 "w_mbytes_per_sec": 0 00:16:32.813 }, 00:16:32.813 "claimed": true, 00:16:32.813 "claim_type": "exclusive_write", 00:16:32.813 "zoned": false, 00:16:32.813 "supported_io_types": { 00:16:32.813 "read": true, 00:16:32.813 "write": true, 00:16:32.813 "unmap": true, 00:16:32.813 "write_zeroes": true, 00:16:32.813 "flush": true, 00:16:32.813 "reset": true, 00:16:32.813 "compare": false, 00:16:32.813 "compare_and_write": false, 00:16:32.813 "abort": true, 00:16:32.813 "nvme_admin": false, 00:16:32.813 "nvme_io": false 00:16:32.813 }, 00:16:32.813 "memory_domains": [ 00:16:32.813 { 00:16:32.813 "dma_device_id": "system", 00:16:32.813 "dma_device_type": 1 00:16:32.813 }, 00:16:32.813 { 
00:16:32.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.813 "dma_device_type": 2 00:16:32.813 } 00:16:32.813 ], 00:16:32.813 "driver_specific": { 00:16:32.813 "passthru": { 00:16:32.813 "name": "pt1", 00:16:32.813 "base_bdev_name": "malloc1" 00:16:32.813 } 00:16:32.813 } 00:16:32.813 }' 00:16:32.813 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:32.813 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:32.813 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:32.813 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:32.813 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:32.813 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.813 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:32.813 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:33.070 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:33.070 03:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:33.070 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:33.070 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:33.070 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:33.070 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:33.070 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:33.327 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:33.327 "name": "pt2", 00:16:33.327 "aliases": [ 00:16:33.327 "58a65cf6-1c6b-520c-9784-499b73022586" 00:16:33.327 ], 00:16:33.327 "product_name": "passthru", 00:16:33.327 "block_size": 512, 00:16:33.327 "num_blocks": 65536, 00:16:33.327 "uuid": "58a65cf6-1c6b-520c-9784-499b73022586", 00:16:33.327 "assigned_rate_limits": { 00:16:33.327 "rw_ios_per_sec": 0, 00:16:33.327 "rw_mbytes_per_sec": 0, 00:16:33.327 "r_mbytes_per_sec": 0, 00:16:33.327 "w_mbytes_per_sec": 0 00:16:33.327 }, 00:16:33.327 "claimed": true, 00:16:33.327 "claim_type": "exclusive_write", 00:16:33.327 "zoned": false, 00:16:33.327 "supported_io_types": { 00:16:33.327 "read": true, 00:16:33.327 "write": true, 00:16:33.327 "unmap": true, 00:16:33.327 "write_zeroes": true, 00:16:33.327 "flush": true, 00:16:33.327 "reset": true, 00:16:33.327 "compare": false, 00:16:33.327 "compare_and_write": false, 00:16:33.327 "abort": true, 00:16:33.327 "nvme_admin": false, 00:16:33.327 "nvme_io": false 00:16:33.327 }, 00:16:33.327 "memory_domains": [ 00:16:33.327 { 00:16:33.328 "dma_device_id": "system", 00:16:33.328 "dma_device_type": 1 00:16:33.328 }, 00:16:33.328 { 00:16:33.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.328 "dma_device_type": 2 00:16:33.328 } 00:16:33.328 ], 00:16:33.328 "driver_specific": { 00:16:33.328 "passthru": { 00:16:33.328 "name": "pt2", 00:16:33.328 "base_bdev_name": "malloc2" 00:16:33.328 } 00:16:33.328 } 00:16:33.328 }' 00:16:33.328 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:33.328 03:11:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:33.328 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:33.328 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:33.328 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:33.585 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:33.585 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:33.586 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:33.586 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:33.586 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:33.586 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:33.586 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:33.586 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:33.586 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:33.586 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:33.845 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:33.845 "name": "pt3", 00:16:33.845 "aliases": [ 00:16:33.845 "ef2cb0bd-a58b-59ac-b571-a1b1ae80691d" 00:16:33.845 ], 00:16:33.845 "product_name": "passthru", 00:16:33.845 "block_size": 512, 00:16:33.845 "num_blocks": 65536, 00:16:33.845 "uuid": "ef2cb0bd-a58b-59ac-b571-a1b1ae80691d", 00:16:33.845 "assigned_rate_limits": { 00:16:33.845 "rw_ios_per_sec": 0, 00:16:33.845 "rw_mbytes_per_sec": 0, 00:16:33.845 "r_mbytes_per_sec": 0, 00:16:33.845 "w_mbytes_per_sec": 0 00:16:33.845 }, 00:16:33.845 "claimed": true, 00:16:33.845 "claim_type": "exclusive_write", 00:16:33.845 "zoned": false, 00:16:33.845 "supported_io_types": { 00:16:33.845 "read": true, 00:16:33.845 "write": true, 00:16:33.845 "unmap": true, 00:16:33.845 "write_zeroes": true, 00:16:33.845 "flush": true, 00:16:33.845 "reset": true, 00:16:33.845 "compare": false, 00:16:33.845 "compare_and_write": false, 00:16:33.845 "abort": true, 00:16:33.845 "nvme_admin": false, 00:16:33.845 "nvme_io": false 00:16:33.845 }, 00:16:33.845 "memory_domains": [ 00:16:33.845 { 00:16:33.845 "dma_device_id": "system", 00:16:33.845 "dma_device_type": 1 00:16:33.845 }, 00:16:33.845 { 00:16:33.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.845 "dma_device_type": 2 00:16:33.845 } 00:16:33.845 ], 00:16:33.845 "driver_specific": { 00:16:33.845 "passthru": { 00:16:33.845 "name": "pt3", 00:16:33.845 "base_bdev_name": "malloc3" 00:16:33.845 } 00:16:33.845 } 00:16:33.845 }' 00:16:33.845 03:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:34.143 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:34.143 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:34.143 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:34.143 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:34.143 03:11:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:34.143 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:34.143 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:34.143 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:34.143 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:34.412 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:34.412 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:34.412 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:34.412 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:34.412 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:34.669 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:34.669 "name": "pt4", 00:16:34.669 "aliases": [ 00:16:34.669 "8f06579c-74d0-5c48-a098-bb957248c8b3" 00:16:34.669 ], 00:16:34.669 "product_name": "passthru", 00:16:34.669 "block_size": 512, 00:16:34.669 "num_blocks": 65536, 00:16:34.669 "uuid": "8f06579c-74d0-5c48-a098-bb957248c8b3", 00:16:34.669 "assigned_rate_limits": { 00:16:34.669 "rw_ios_per_sec": 0, 00:16:34.669 "rw_mbytes_per_sec": 0, 00:16:34.669 "r_mbytes_per_sec": 0, 00:16:34.669 "w_mbytes_per_sec": 0 00:16:34.669 }, 00:16:34.669 "claimed": true, 00:16:34.669 "claim_type": "exclusive_write", 00:16:34.669 "zoned": false, 00:16:34.669 "supported_io_types": { 00:16:34.669 "read": true, 00:16:34.669 "write": true, 00:16:34.669 "unmap": true, 00:16:34.669 "write_zeroes": true, 00:16:34.669 "flush": true, 00:16:34.669 "reset": true, 00:16:34.669 "compare": false, 00:16:34.669 "compare_and_write": false, 00:16:34.669 "abort": true, 00:16:34.669 "nvme_admin": false, 00:16:34.669 "nvme_io": false 00:16:34.669 }, 00:16:34.669 "memory_domains": [ 00:16:34.669 { 00:16:34.669 "dma_device_id": "system", 00:16:34.669 "dma_device_type": 1 00:16:34.669 }, 00:16:34.669 { 00:16:34.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.669 "dma_device_type": 2 00:16:34.669 } 00:16:34.669 ], 00:16:34.669 "driver_specific": { 00:16:34.669 "passthru": { 00:16:34.669 "name": "pt4", 00:16:34.669 "base_bdev_name": "malloc4" 00:16:34.669 } 00:16:34.669 } 00:16:34.669 }' 00:16:34.669 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:34.670 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:34.670 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:34.670 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:34.670 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:34.670 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:34.670 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:34.927 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:34.927 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:34.927 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq 
.dif_type 00:16:34.927 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:34.927 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:34.927 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:34.927 03:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:35.184 [2024-05-15 03:11:06.208020] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.184 03:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=27c6502f-166f-4c33-bd04-8d5975e779e3 00:16:35.184 03:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 27c6502f-166f-4c33-bd04-8d5975e779e3 ']' 00:16:35.184 03:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:35.442 [2024-05-15 03:11:06.452360] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:35.442 [2024-05-15 03:11:06.452380] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.442 [2024-05-15 03:11:06.452426] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.442 [2024-05-15 03:11:06.452488] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.442 [2024-05-15 03:11:06.452497] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24d4900 name raid_bdev1, state offline 00:16:35.442 03:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.442 03:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:35.700 03:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:35.700 03:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:35.700 03:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:35.700 03:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:35.957 03:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:35.957 03:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:36.215 03:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.215 03:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:36.473 03:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.473 03:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:36.730 03:11:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@451 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:36.730 03:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:16:36.988 03:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:36.988 [2024-05-15 03:11:08.132785] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:36.988 [2024-05-15 03:11:08.134204] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:36.988 [2024-05-15 03:11:08.134248] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:36.988 [2024-05-15 03:11:08.134283] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:36.988 [2024-05-15 03:11:08.134325] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:36.988 [2024-05-15 03:11:08.134361] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:36.988 [2024-05-15 03:11:08.134381] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:36.988 [2024-05-15 03:11:08.134400] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:36.988 [2024-05-15 03:11:08.134414] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.988 [2024-05-15 03:11:08.134422] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24dbc30 name raid_bdev1, state configuring 00:16:36.988 request: 00:16:36.988 { 00:16:36.988 "name": "raid_bdev1", 00:16:36.988 "raid_level": "raid0", 00:16:36.988 "base_bdevs": [ 00:16:36.988 "malloc1", 00:16:36.988 "malloc2", 00:16:36.988 "malloc3", 00:16:36.988 "malloc4" 00:16:36.988 ], 00:16:36.988 "superblock": false, 00:16:36.988 "strip_size_kb": 64, 00:16:36.988 "method": "bdev_raid_create", 00:16:36.988 "req_id": 1 00:16:36.988 } 00:16:36.988 Got JSON-RPC error response 00:16:36.988 response: 00:16:36.988 { 00:16:36.988 "code": -17, 00:16:36.988 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:36.988 } 00:16:37.246 03:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:37.246 03:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:37.246 03:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:37.246 03:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:37.246 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.246 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:37.504 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:37.504 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:37.504 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:37.504 [2024-05-15 03:11:08.650094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:37.504 [2024-05-15 03:11:08.650129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.504 [2024-05-15 03:11:08.650145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24d5040 00:16:37.504 [2024-05-15 03:11:08.650154] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.504 [2024-05-15 03:11:08.651812] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.504 [2024-05-15 03:11:08.651838] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:37.504 [2024-05-15 03:11:08.651905] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:37.504 [2024-05-15 03:11:08.651930] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:37.504 pt1 00:16:37.761 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:37.761 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:37.761 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:37.761 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:37.761 
03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:37.761 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:37.761 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:37.761 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:37.761 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:37.761 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:37.761 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.761 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.020 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.020 "name": "raid_bdev1", 00:16:38.020 "uuid": "27c6502f-166f-4c33-bd04-8d5975e779e3", 00:16:38.020 "strip_size_kb": 64, 00:16:38.020 "state": "configuring", 00:16:38.020 "raid_level": "raid0", 00:16:38.020 "superblock": true, 00:16:38.020 "num_base_bdevs": 4, 00:16:38.020 "num_base_bdevs_discovered": 1, 00:16:38.020 "num_base_bdevs_operational": 4, 00:16:38.020 "base_bdevs_list": [ 00:16:38.020 { 00:16:38.020 "name": "pt1", 00:16:38.020 "uuid": "3387d292-77d1-5f30-9773-7fa2c090fa36", 00:16:38.020 "is_configured": true, 00:16:38.020 "data_offset": 2048, 00:16:38.020 "data_size": 63488 00:16:38.020 }, 00:16:38.020 { 00:16:38.020 "name": null, 00:16:38.020 "uuid": "58a65cf6-1c6b-520c-9784-499b73022586", 00:16:38.020 "is_configured": false, 00:16:38.020 "data_offset": 2048, 00:16:38.020 "data_size": 63488 00:16:38.020 }, 00:16:38.020 { 00:16:38.020 "name": null, 00:16:38.020 "uuid": "ef2cb0bd-a58b-59ac-b571-a1b1ae80691d", 00:16:38.020 "is_configured": false, 00:16:38.020 "data_offset": 2048, 00:16:38.020 "data_size": 63488 00:16:38.020 }, 00:16:38.020 { 00:16:38.020 "name": null, 00:16:38.020 "uuid": "8f06579c-74d0-5c48-a098-bb957248c8b3", 00:16:38.020 "is_configured": false, 00:16:38.020 "data_offset": 2048, 00:16:38.020 "data_size": 63488 00:16:38.020 } 00:16:38.020 ] 00:16:38.020 }' 00:16:38.020 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.020 03:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.586 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:38.586 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:38.849 [2024-05-15 03:11:09.773105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:38.849 [2024-05-15 03:11:09.773149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.849 [2024-05-15 03:11:09.773167] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24d4dd0 00:16:38.849 [2024-05-15 03:11:09.773176] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.849 [2024-05-15 03:11:09.773517] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.849 [2024-05-15 03:11:09.773533] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:38.849 [2024-05-15 03:11:09.773591] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:38.849 [2024-05-15 03:11:09.773608] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.849 pt2 00:16:38.849 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:39.107 [2024-05-15 03:11:10.013803] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:39.107 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:39.107 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:39.107 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:39.107 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:39.107 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:39.107 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:39.107 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.107 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.107 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.107 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.107 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.107 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.366 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.366 "name": "raid_bdev1", 00:16:39.366 "uuid": "27c6502f-166f-4c33-bd04-8d5975e779e3", 00:16:39.366 "strip_size_kb": 64, 00:16:39.366 "state": "configuring", 00:16:39.366 "raid_level": "raid0", 00:16:39.366 "superblock": true, 00:16:39.366 "num_base_bdevs": 4, 00:16:39.366 "num_base_bdevs_discovered": 1, 00:16:39.366 "num_base_bdevs_operational": 4, 00:16:39.366 "base_bdevs_list": [ 00:16:39.366 { 00:16:39.366 "name": "pt1", 00:16:39.366 "uuid": "3387d292-77d1-5f30-9773-7fa2c090fa36", 00:16:39.366 "is_configured": true, 00:16:39.366 "data_offset": 2048, 00:16:39.366 "data_size": 63488 00:16:39.366 }, 00:16:39.366 { 00:16:39.366 "name": null, 00:16:39.366 "uuid": "58a65cf6-1c6b-520c-9784-499b73022586", 00:16:39.366 "is_configured": false, 00:16:39.366 "data_offset": 2048, 00:16:39.366 "data_size": 63488 00:16:39.366 }, 00:16:39.366 { 00:16:39.366 "name": null, 00:16:39.366 "uuid": "ef2cb0bd-a58b-59ac-b571-a1b1ae80691d", 00:16:39.366 "is_configured": false, 00:16:39.366 "data_offset": 2048, 00:16:39.366 "data_size": 63488 00:16:39.366 }, 00:16:39.366 { 00:16:39.366 "name": null, 00:16:39.366 "uuid": "8f06579c-74d0-5c48-a098-bb957248c8b3", 00:16:39.366 "is_configured": false, 00:16:39.366 "data_offset": 2048, 00:16:39.366 "data_size": 63488 00:16:39.366 } 00:16:39.366 ] 00:16:39.366 }' 00:16:39.366 03:11:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.366 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.932 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:39.932 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:39.932 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.190 [2024-05-15 03:11:11.136928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.190 [2024-05-15 03:11:11.136978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.190 [2024-05-15 03:11:11.136996] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26848b0 00:16:40.190 [2024-05-15 03:11:11.137006] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.190 [2024-05-15 03:11:11.137342] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.190 [2024-05-15 03:11:11.137356] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.190 [2024-05-15 03:11:11.137415] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:40.190 [2024-05-15 03:11:11.137432] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.190 pt2 00:16:40.190 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:40.190 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:40.190 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:40.448 [2024-05-15 03:11:11.393610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:40.448 [2024-05-15 03:11:11.393639] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.448 [2024-05-15 03:11:11.393658] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x268d0c0 00:16:40.448 [2024-05-15 03:11:11.393667] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.448 [2024-05-15 03:11:11.393972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.448 [2024-05-15 03:11:11.393987] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:40.448 [2024-05-15 03:11:11.394037] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:40.448 [2024-05-15 03:11:11.394054] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:40.448 pt3 00:16:40.448 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:40.448 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:40.448 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:40.706 [2024-05-15 03:11:11.650299] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:40.706 [2024-05-15 03:11:11.650329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.706 [2024-05-15 03:11:11.650343] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24d4470 00:16:40.706 [2024-05-15 03:11:11.650352] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.706 [2024-05-15 03:11:11.650656] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.706 [2024-05-15 03:11:11.650671] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:40.706 [2024-05-15 03:11:11.650720] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:16:40.706 [2024-05-15 03:11:11.650737] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:40.706 [2024-05-15 03:11:11.650872] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x24d54e0 00:16:40.706 [2024-05-15 03:11:11.650882] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:40.706 [2024-05-15 03:11:11.651063] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26807e0 00:16:40.706 [2024-05-15 03:11:11.651197] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24d54e0 00:16:40.706 [2024-05-15 03:11:11.651205] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24d54e0 00:16:40.706 [2024-05-15 03:11:11.651299] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.706 pt4 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.706 "name": "raid_bdev1", 00:16:40.706 "uuid": "27c6502f-166f-4c33-bd04-8d5975e779e3", 00:16:40.706 "strip_size_kb": 64, 
00:16:40.706 "state": "online", 00:16:40.706 "raid_level": "raid0", 00:16:40.706 "superblock": true, 00:16:40.706 "num_base_bdevs": 4, 00:16:40.706 "num_base_bdevs_discovered": 4, 00:16:40.706 "num_base_bdevs_operational": 4, 00:16:40.706 "base_bdevs_list": [ 00:16:40.706 { 00:16:40.706 "name": "pt1", 00:16:40.706 "uuid": "3387d292-77d1-5f30-9773-7fa2c090fa36", 00:16:40.706 "is_configured": true, 00:16:40.706 "data_offset": 2048, 00:16:40.706 "data_size": 63488 00:16:40.706 }, 00:16:40.706 { 00:16:40.706 "name": "pt2", 00:16:40.706 "uuid": "58a65cf6-1c6b-520c-9784-499b73022586", 00:16:40.706 "is_configured": true, 00:16:40.706 "data_offset": 2048, 00:16:40.706 "data_size": 63488 00:16:40.706 }, 00:16:40.706 { 00:16:40.706 "name": "pt3", 00:16:40.706 "uuid": "ef2cb0bd-a58b-59ac-b571-a1b1ae80691d", 00:16:40.706 "is_configured": true, 00:16:40.706 "data_offset": 2048, 00:16:40.706 "data_size": 63488 00:16:40.706 }, 00:16:40.706 { 00:16:40.706 "name": "pt4", 00:16:40.706 "uuid": "8f06579c-74d0-5c48-a098-bb957248c8b3", 00:16:40.706 "is_configured": true, 00:16:40.706 "data_offset": 2048, 00:16:40.706 "data_size": 63488 00:16:40.706 } 00:16:40.706 ] 00:16:40.706 }' 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.706 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.639 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:41.639 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:16:41.639 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:41.639 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:41.639 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:41.639 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:41.639 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:41.639 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:41.639 [2024-05-15 03:11:12.717465] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.639 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:41.639 "name": "raid_bdev1", 00:16:41.639 "aliases": [ 00:16:41.639 "27c6502f-166f-4c33-bd04-8d5975e779e3" 00:16:41.639 ], 00:16:41.639 "product_name": "Raid Volume", 00:16:41.639 "block_size": 512, 00:16:41.639 "num_blocks": 253952, 00:16:41.639 "uuid": "27c6502f-166f-4c33-bd04-8d5975e779e3", 00:16:41.639 "assigned_rate_limits": { 00:16:41.639 "rw_ios_per_sec": 0, 00:16:41.639 "rw_mbytes_per_sec": 0, 00:16:41.639 "r_mbytes_per_sec": 0, 00:16:41.639 "w_mbytes_per_sec": 0 00:16:41.639 }, 00:16:41.639 "claimed": false, 00:16:41.639 "zoned": false, 00:16:41.639 "supported_io_types": { 00:16:41.639 "read": true, 00:16:41.639 "write": true, 00:16:41.639 "unmap": true, 00:16:41.639 "write_zeroes": true, 00:16:41.639 "flush": true, 00:16:41.639 "reset": true, 00:16:41.639 "compare": false, 00:16:41.639 "compare_and_write": false, 00:16:41.639 "abort": false, 00:16:41.639 "nvme_admin": false, 00:16:41.639 "nvme_io": false 00:16:41.639 }, 00:16:41.639 "memory_domains": [ 00:16:41.639 { 00:16:41.639 
"dma_device_id": "system", 00:16:41.639 "dma_device_type": 1 00:16:41.639 }, 00:16:41.639 { 00:16:41.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.639 "dma_device_type": 2 00:16:41.639 }, 00:16:41.639 { 00:16:41.639 "dma_device_id": "system", 00:16:41.639 "dma_device_type": 1 00:16:41.639 }, 00:16:41.639 { 00:16:41.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.639 "dma_device_type": 2 00:16:41.639 }, 00:16:41.639 { 00:16:41.639 "dma_device_id": "system", 00:16:41.639 "dma_device_type": 1 00:16:41.639 }, 00:16:41.639 { 00:16:41.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.639 "dma_device_type": 2 00:16:41.639 }, 00:16:41.639 { 00:16:41.639 "dma_device_id": "system", 00:16:41.639 "dma_device_type": 1 00:16:41.639 }, 00:16:41.639 { 00:16:41.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.639 "dma_device_type": 2 00:16:41.639 } 00:16:41.639 ], 00:16:41.639 "driver_specific": { 00:16:41.639 "raid": { 00:16:41.639 "uuid": "27c6502f-166f-4c33-bd04-8d5975e779e3", 00:16:41.640 "strip_size_kb": 64, 00:16:41.640 "state": "online", 00:16:41.640 "raid_level": "raid0", 00:16:41.640 "superblock": true, 00:16:41.640 "num_base_bdevs": 4, 00:16:41.640 "num_base_bdevs_discovered": 4, 00:16:41.640 "num_base_bdevs_operational": 4, 00:16:41.640 "base_bdevs_list": [ 00:16:41.640 { 00:16:41.640 "name": "pt1", 00:16:41.640 "uuid": "3387d292-77d1-5f30-9773-7fa2c090fa36", 00:16:41.640 "is_configured": true, 00:16:41.640 "data_offset": 2048, 00:16:41.640 "data_size": 63488 00:16:41.640 }, 00:16:41.640 { 00:16:41.640 "name": "pt2", 00:16:41.640 "uuid": "58a65cf6-1c6b-520c-9784-499b73022586", 00:16:41.640 "is_configured": true, 00:16:41.640 "data_offset": 2048, 00:16:41.640 "data_size": 63488 00:16:41.640 }, 00:16:41.640 { 00:16:41.640 "name": "pt3", 00:16:41.640 "uuid": "ef2cb0bd-a58b-59ac-b571-a1b1ae80691d", 00:16:41.640 "is_configured": true, 00:16:41.640 "data_offset": 2048, 00:16:41.640 "data_size": 63488 00:16:41.640 }, 00:16:41.640 { 00:16:41.640 "name": "pt4", 00:16:41.640 "uuid": "8f06579c-74d0-5c48-a098-bb957248c8b3", 00:16:41.640 "is_configured": true, 00:16:41.640 "data_offset": 2048, 00:16:41.640 "data_size": 63488 00:16:41.640 } 00:16:41.640 ] 00:16:41.640 } 00:16:41.640 } 00:16:41.640 }' 00:16:41.640 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:41.640 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:16:41.640 pt2 00:16:41.640 pt3 00:16:41.640 pt4' 00:16:41.640 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:41.640 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:41.640 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:41.898 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:41.898 "name": "pt1", 00:16:41.898 "aliases": [ 00:16:41.898 "3387d292-77d1-5f30-9773-7fa2c090fa36" 00:16:41.898 ], 00:16:41.898 "product_name": "passthru", 00:16:41.898 "block_size": 512, 00:16:41.898 "num_blocks": 65536, 00:16:41.898 "uuid": "3387d292-77d1-5f30-9773-7fa2c090fa36", 00:16:41.898 "assigned_rate_limits": { 00:16:41.898 "rw_ios_per_sec": 0, 00:16:41.898 "rw_mbytes_per_sec": 0, 00:16:41.898 "r_mbytes_per_sec": 0, 00:16:41.898 "w_mbytes_per_sec": 0 
00:16:41.898 }, 00:16:41.898 "claimed": true, 00:16:41.898 "claim_type": "exclusive_write", 00:16:41.898 "zoned": false, 00:16:41.898 "supported_io_types": { 00:16:41.898 "read": true, 00:16:41.898 "write": true, 00:16:41.898 "unmap": true, 00:16:41.898 "write_zeroes": true, 00:16:41.898 "flush": true, 00:16:41.898 "reset": true, 00:16:41.898 "compare": false, 00:16:41.898 "compare_and_write": false, 00:16:41.898 "abort": true, 00:16:41.898 "nvme_admin": false, 00:16:41.898 "nvme_io": false 00:16:41.898 }, 00:16:41.898 "memory_domains": [ 00:16:41.898 { 00:16:41.898 "dma_device_id": "system", 00:16:41.898 "dma_device_type": 1 00:16:41.898 }, 00:16:41.898 { 00:16:41.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.898 "dma_device_type": 2 00:16:41.898 } 00:16:41.898 ], 00:16:41.898 "driver_specific": { 00:16:41.898 "passthru": { 00:16:41.898 "name": "pt1", 00:16:41.898 "base_bdev_name": "malloc1" 00:16:41.898 } 00:16:41.898 } 00:16:41.898 }' 00:16:41.898 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:42.155 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:42.155 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:42.155 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:42.155 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:42.155 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:42.155 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:42.155 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:42.412 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:42.412 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:42.412 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:42.412 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:42.412 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:42.412 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:42.412 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:42.671 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:42.671 "name": "pt2", 00:16:42.671 "aliases": [ 00:16:42.671 "58a65cf6-1c6b-520c-9784-499b73022586" 00:16:42.671 ], 00:16:42.671 "product_name": "passthru", 00:16:42.671 "block_size": 512, 00:16:42.671 "num_blocks": 65536, 00:16:42.671 "uuid": "58a65cf6-1c6b-520c-9784-499b73022586", 00:16:42.671 "assigned_rate_limits": { 00:16:42.671 "rw_ios_per_sec": 0, 00:16:42.671 "rw_mbytes_per_sec": 0, 00:16:42.671 "r_mbytes_per_sec": 0, 00:16:42.671 "w_mbytes_per_sec": 0 00:16:42.671 }, 00:16:42.671 "claimed": true, 00:16:42.671 "claim_type": "exclusive_write", 00:16:42.671 "zoned": false, 00:16:42.671 "supported_io_types": { 00:16:42.671 "read": true, 00:16:42.671 "write": true, 00:16:42.671 "unmap": true, 00:16:42.671 "write_zeroes": true, 00:16:42.671 "flush": true, 00:16:42.671 "reset": true, 00:16:42.671 "compare": false, 00:16:42.671 "compare_and_write": false, 00:16:42.671 "abort": true, 
00:16:42.671 "nvme_admin": false, 00:16:42.671 "nvme_io": false 00:16:42.671 }, 00:16:42.671 "memory_domains": [ 00:16:42.671 { 00:16:42.671 "dma_device_id": "system", 00:16:42.671 "dma_device_type": 1 00:16:42.671 }, 00:16:42.671 { 00:16:42.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.671 "dma_device_type": 2 00:16:42.671 } 00:16:42.671 ], 00:16:42.671 "driver_specific": { 00:16:42.671 "passthru": { 00:16:42.671 "name": "pt2", 00:16:42.671 "base_bdev_name": "malloc2" 00:16:42.671 } 00:16:42.671 } 00:16:42.671 }' 00:16:42.671 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:42.671 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:42.671 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:42.671 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:42.671 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:42.929 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:42.929 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:42.929 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:42.929 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:42.929 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:42.929 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:42.929 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:42.929 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:42.929 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:42.929 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:43.187 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:43.187 "name": "pt3", 00:16:43.187 "aliases": [ 00:16:43.187 "ef2cb0bd-a58b-59ac-b571-a1b1ae80691d" 00:16:43.187 ], 00:16:43.187 "product_name": "passthru", 00:16:43.187 "block_size": 512, 00:16:43.187 "num_blocks": 65536, 00:16:43.187 "uuid": "ef2cb0bd-a58b-59ac-b571-a1b1ae80691d", 00:16:43.187 "assigned_rate_limits": { 00:16:43.187 "rw_ios_per_sec": 0, 00:16:43.187 "rw_mbytes_per_sec": 0, 00:16:43.187 "r_mbytes_per_sec": 0, 00:16:43.187 "w_mbytes_per_sec": 0 00:16:43.187 }, 00:16:43.187 "claimed": true, 00:16:43.187 "claim_type": "exclusive_write", 00:16:43.187 "zoned": false, 00:16:43.187 "supported_io_types": { 00:16:43.187 "read": true, 00:16:43.187 "write": true, 00:16:43.187 "unmap": true, 00:16:43.187 "write_zeroes": true, 00:16:43.187 "flush": true, 00:16:43.187 "reset": true, 00:16:43.187 "compare": false, 00:16:43.187 "compare_and_write": false, 00:16:43.187 "abort": true, 00:16:43.187 "nvme_admin": false, 00:16:43.187 "nvme_io": false 00:16:43.187 }, 00:16:43.187 "memory_domains": [ 00:16:43.187 { 00:16:43.187 "dma_device_id": "system", 00:16:43.187 "dma_device_type": 1 00:16:43.188 }, 00:16:43.188 { 00:16:43.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.188 "dma_device_type": 2 00:16:43.188 } 00:16:43.188 ], 00:16:43.188 "driver_specific": { 00:16:43.188 "passthru": { 00:16:43.188 "name": 
"pt3", 00:16:43.188 "base_bdev_name": "malloc3" 00:16:43.188 } 00:16:43.188 } 00:16:43.188 }' 00:16:43.188 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:43.188 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:43.445 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:43.445 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:43.445 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:43.445 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:43.445 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:43.445 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:43.445 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:43.445 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:43.702 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:43.702 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:43.702 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:43.702 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:43.702 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:43.959 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:43.959 "name": "pt4", 00:16:43.959 "aliases": [ 00:16:43.959 "8f06579c-74d0-5c48-a098-bb957248c8b3" 00:16:43.959 ], 00:16:43.959 "product_name": "passthru", 00:16:43.959 "block_size": 512, 00:16:43.959 "num_blocks": 65536, 00:16:43.959 "uuid": "8f06579c-74d0-5c48-a098-bb957248c8b3", 00:16:43.959 "assigned_rate_limits": { 00:16:43.959 "rw_ios_per_sec": 0, 00:16:43.959 "rw_mbytes_per_sec": 0, 00:16:43.959 "r_mbytes_per_sec": 0, 00:16:43.959 "w_mbytes_per_sec": 0 00:16:43.959 }, 00:16:43.959 "claimed": true, 00:16:43.959 "claim_type": "exclusive_write", 00:16:43.959 "zoned": false, 00:16:43.959 "supported_io_types": { 00:16:43.959 "read": true, 00:16:43.959 "write": true, 00:16:43.959 "unmap": true, 00:16:43.959 "write_zeroes": true, 00:16:43.959 "flush": true, 00:16:43.959 "reset": true, 00:16:43.959 "compare": false, 00:16:43.959 "compare_and_write": false, 00:16:43.959 "abort": true, 00:16:43.959 "nvme_admin": false, 00:16:43.959 "nvme_io": false 00:16:43.959 }, 00:16:43.959 "memory_domains": [ 00:16:43.959 { 00:16:43.959 "dma_device_id": "system", 00:16:43.959 "dma_device_type": 1 00:16:43.959 }, 00:16:43.959 { 00:16:43.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.959 "dma_device_type": 2 00:16:43.959 } 00:16:43.959 ], 00:16:43.959 "driver_specific": { 00:16:43.959 "passthru": { 00:16:43.959 "name": "pt4", 00:16:43.959 "base_bdev_name": "malloc4" 00:16:43.959 } 00:16:43.959 } 00:16:43.959 }' 00:16:43.959 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:43.959 03:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:43.960 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:43.960 03:11:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:43.960 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:43.960 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:43.960 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:44.217 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:44.217 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:44.217 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:44.217 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:44.217 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:44.217 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:44.217 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:44.475 [2024-05-15 03:11:15.520982] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 27c6502f-166f-4c33-bd04-8d5975e779e3 '!=' 27c6502f-166f-4c33-bd04-8d5975e779e3 ']' 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 4114900 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 4114900 ']' 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 4114900 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4114900 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4114900' 00:16:44.475 killing process with pid 4114900 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 4114900 00:16:44.475 [2024-05-15 03:11:15.589200] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.475 [2024-05-15 03:11:15.589261] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.475 [2024-05-15 03:11:15.589325] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.475 [2024-05-15 03:11:15.589334] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24d54e0 name raid_bdev1, state offline 00:16:44.475 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 4114900 
00:16:44.475 [2024-05-15 03:11:15.623861] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.733 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:16:44.733 00:16:44.733 real 0m16.867s 00:16:44.733 user 0m31.145s 00:16:44.733 sys 0m2.350s 00:16:44.733 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:44.733 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.733 ************************************ 00:16:44.733 END TEST raid_superblock_test 00:16:44.733 ************************************ 00:16:44.733 03:11:15 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:16:44.733 03:11:15 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:16:44.733 03:11:15 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:44.733 03:11:15 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:44.733 03:11:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.990 ************************************ 00:16:44.990 START TEST raid_state_function_test 00:16:44.990 ************************************ 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 false 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 
00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=4118052 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4118052' 00:16:44.991 Process raid pid: 4118052 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 4118052 /var/tmp/spdk-raid.sock 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 4118052 ']' 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:44.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:44.991 03:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.991 [2024-05-15 03:11:15.990333] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:16:44.991 [2024-05-15 03:11:15.990385] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.991 [2024-05-15 03:11:16.089405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.248 [2024-05-15 03:11:16.184361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.248 [2024-05-15 03:11:16.243194] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.248 [2024-05-15 03:11:16.243227] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.812 03:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:45.812 03:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:16:45.812 03:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:46.069 [2024-05-15 03:11:17.175185] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.069 [2024-05-15 03:11:17.175225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.069 [2024-05-15 03:11:17.175234] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.069 [2024-05-15 03:11:17.175243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.069 [2024-05-15 03:11:17.175250] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:46.069 [2024-05-15 03:11:17.175258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:46.069 [2024-05-15 03:11:17.175265] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:46.069 [2024-05-15 03:11:17.175277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:46.069 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:46.069 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:46.070 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:46.070 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:46.070 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:46.070 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:46.070 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:46.070 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:46.070 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:46.070 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:46.070 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.070 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.328 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.328 "name": "Existed_Raid", 00:16:46.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.328 "strip_size_kb": 64, 00:16:46.328 "state": "configuring", 00:16:46.328 "raid_level": "concat", 00:16:46.328 "superblock": false, 00:16:46.328 "num_base_bdevs": 4, 00:16:46.328 "num_base_bdevs_discovered": 0, 00:16:46.328 "num_base_bdevs_operational": 4, 00:16:46.328 "base_bdevs_list": [ 00:16:46.328 { 00:16:46.328 "name": "BaseBdev1", 00:16:46.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.328 "is_configured": false, 00:16:46.328 "data_offset": 0, 00:16:46.328 "data_size": 0 00:16:46.328 }, 00:16:46.328 { 00:16:46.328 "name": "BaseBdev2", 00:16:46.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.328 "is_configured": false, 00:16:46.328 "data_offset": 0, 00:16:46.328 "data_size": 0 00:16:46.328 }, 00:16:46.328 { 00:16:46.328 "name": "BaseBdev3", 00:16:46.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.328 "is_configured": false, 00:16:46.328 "data_offset": 0, 00:16:46.328 "data_size": 0 00:16:46.328 }, 00:16:46.328 { 00:16:46.328 "name": "BaseBdev4", 00:16:46.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.328 "is_configured": false, 00:16:46.328 "data_offset": 0, 00:16:46.328 "data_size": 0 00:16:46.328 } 00:16:46.328 ] 00:16:46.328 }' 00:16:46.328 03:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.328 03:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.285 03:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:47.285 [2024-05-15 03:11:18.290019] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.285 [2024-05-15 03:11:18.290048] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1576e00 name Existed_Raid, state configuring 00:16:47.285 03:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:47.553 [2024-05-15 03:11:18.538694] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:47.553 [2024-05-15 03:11:18.538718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:47.553 [2024-05-15 03:11:18.538726] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.553 [2024-05-15 03:11:18.538735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.553 [2024-05-15 03:11:18.538742] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:47.553 [2024-05-15 03:11:18.538755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.553 [2024-05-15 03:11:18.538762] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:47.553 [2024-05-15 03:11:18.538770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:47.553 03:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:47.810 [2024-05-15 03:11:18.804863] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.810 BaseBdev1 00:16:47.810 03:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:16:47.810 03:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:47.810 03:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:47.810 03:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:47.810 03:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:47.811 03:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:47.811 03:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.068 03:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:48.327 [ 00:16:48.327 { 00:16:48.327 "name": "BaseBdev1", 00:16:48.327 "aliases": [ 00:16:48.327 "de24827b-b9b6-47e2-aaa2-538bb5aff6df" 00:16:48.327 ], 00:16:48.327 "product_name": "Malloc disk", 00:16:48.327 "block_size": 512, 00:16:48.327 "num_blocks": 65536, 00:16:48.327 "uuid": "de24827b-b9b6-47e2-aaa2-538bb5aff6df", 00:16:48.327 "assigned_rate_limits": { 00:16:48.327 "rw_ios_per_sec": 0, 00:16:48.327 "rw_mbytes_per_sec": 0, 00:16:48.327 "r_mbytes_per_sec": 0, 00:16:48.327 "w_mbytes_per_sec": 0 00:16:48.327 }, 00:16:48.327 "claimed": true, 00:16:48.327 "claim_type": "exclusive_write", 00:16:48.327 "zoned": false, 00:16:48.327 "supported_io_types": { 00:16:48.327 "read": true, 00:16:48.327 "write": true, 00:16:48.327 "unmap": true, 00:16:48.327 "write_zeroes": true, 00:16:48.327 "flush": true, 00:16:48.327 "reset": true, 00:16:48.327 "compare": false, 00:16:48.327 "compare_and_write": false, 00:16:48.327 "abort": true, 00:16:48.327 "nvme_admin": false, 00:16:48.327 "nvme_io": false 00:16:48.327 }, 00:16:48.327 "memory_domains": [ 00:16:48.327 { 00:16:48.327 "dma_device_id": "system", 00:16:48.327 "dma_device_type": 1 00:16:48.327 }, 00:16:48.327 { 00:16:48.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.327 "dma_device_type": 2 00:16:48.327 } 00:16:48.327 ], 00:16:48.327 "driver_specific": {} 00:16:48.327 } 00:16:48.327 ] 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- 
# local strip_size=64 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.327 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.585 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.585 "name": "Existed_Raid", 00:16:48.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.585 "strip_size_kb": 64, 00:16:48.585 "state": "configuring", 00:16:48.585 "raid_level": "concat", 00:16:48.585 "superblock": false, 00:16:48.585 "num_base_bdevs": 4, 00:16:48.585 "num_base_bdevs_discovered": 1, 00:16:48.585 "num_base_bdevs_operational": 4, 00:16:48.585 "base_bdevs_list": [ 00:16:48.585 { 00:16:48.585 "name": "BaseBdev1", 00:16:48.585 "uuid": "de24827b-b9b6-47e2-aaa2-538bb5aff6df", 00:16:48.585 "is_configured": true, 00:16:48.585 "data_offset": 0, 00:16:48.585 "data_size": 65536 00:16:48.585 }, 00:16:48.585 { 00:16:48.585 "name": "BaseBdev2", 00:16:48.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.585 "is_configured": false, 00:16:48.585 "data_offset": 0, 00:16:48.585 "data_size": 0 00:16:48.585 }, 00:16:48.585 { 00:16:48.585 "name": "BaseBdev3", 00:16:48.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.585 "is_configured": false, 00:16:48.585 "data_offset": 0, 00:16:48.585 "data_size": 0 00:16:48.585 }, 00:16:48.585 { 00:16:48.585 "name": "BaseBdev4", 00:16:48.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.585 "is_configured": false, 00:16:48.585 "data_offset": 0, 00:16:48.585 "data_size": 0 00:16:48.585 } 00:16:48.585 ] 00:16:48.585 }' 00:16:48.585 03:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.585 03:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.148 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:49.405 [2024-05-15 03:11:20.425185] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.405 [2024-05-15 03:11:20.425223] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15770a0 name Existed_Raid, state configuring 00:16:49.405 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:49.662 [2024-05-15 03:11:20.677890] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.662 [2024-05-15 03:11:20.679392] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.662 
[2024-05-15 03:11:20.679421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.662 [2024-05-15 03:11:20.679431] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:49.662 [2024-05-15 03:11:20.679439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.662 [2024-05-15 03:11:20.679446] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:49.662 [2024-05-15 03:11:20.679455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.662 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.920 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:49.920 "name": "Existed_Raid", 00:16:49.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.920 "strip_size_kb": 64, 00:16:49.920 "state": "configuring", 00:16:49.920 "raid_level": "concat", 00:16:49.920 "superblock": false, 00:16:49.920 "num_base_bdevs": 4, 00:16:49.920 "num_base_bdevs_discovered": 1, 00:16:49.920 "num_base_bdevs_operational": 4, 00:16:49.920 "base_bdevs_list": [ 00:16:49.920 { 00:16:49.920 "name": "BaseBdev1", 00:16:49.920 "uuid": "de24827b-b9b6-47e2-aaa2-538bb5aff6df", 00:16:49.920 "is_configured": true, 00:16:49.920 "data_offset": 0, 00:16:49.920 "data_size": 65536 00:16:49.920 }, 00:16:49.920 { 00:16:49.920 "name": "BaseBdev2", 00:16:49.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.920 "is_configured": false, 00:16:49.921 "data_offset": 0, 00:16:49.921 "data_size": 0 00:16:49.921 }, 00:16:49.921 { 00:16:49.921 "name": "BaseBdev3", 00:16:49.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.921 "is_configured": false, 00:16:49.921 "data_offset": 0, 00:16:49.921 "data_size": 0 00:16:49.921 }, 00:16:49.921 { 00:16:49.921 
"name": "BaseBdev4", 00:16:49.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.921 "is_configured": false, 00:16:49.921 "data_offset": 0, 00:16:49.921 "data_size": 0 00:16:49.921 } 00:16:49.921 ] 00:16:49.921 }' 00:16:49.921 03:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:49.921 03:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.485 03:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:50.742 [2024-05-15 03:11:21.816156] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.742 BaseBdev2 00:16:50.742 03:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:16:50.742 03:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:50.742 03:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:50.742 03:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:50.742 03:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:50.742 03:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:50.742 03:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:50.999 03:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:51.256 [ 00:16:51.256 { 00:16:51.256 "name": "BaseBdev2", 00:16:51.256 "aliases": [ 00:16:51.256 "a67f7b42-91f5-42cf-9e53-468bf5953128" 00:16:51.257 ], 00:16:51.257 "product_name": "Malloc disk", 00:16:51.257 "block_size": 512, 00:16:51.257 "num_blocks": 65536, 00:16:51.257 "uuid": "a67f7b42-91f5-42cf-9e53-468bf5953128", 00:16:51.257 "assigned_rate_limits": { 00:16:51.257 "rw_ios_per_sec": 0, 00:16:51.257 "rw_mbytes_per_sec": 0, 00:16:51.257 "r_mbytes_per_sec": 0, 00:16:51.257 "w_mbytes_per_sec": 0 00:16:51.257 }, 00:16:51.257 "claimed": true, 00:16:51.257 "claim_type": "exclusive_write", 00:16:51.257 "zoned": false, 00:16:51.257 "supported_io_types": { 00:16:51.257 "read": true, 00:16:51.257 "write": true, 00:16:51.257 "unmap": true, 00:16:51.257 "write_zeroes": true, 00:16:51.257 "flush": true, 00:16:51.257 "reset": true, 00:16:51.257 "compare": false, 00:16:51.257 "compare_and_write": false, 00:16:51.257 "abort": true, 00:16:51.257 "nvme_admin": false, 00:16:51.257 "nvme_io": false 00:16:51.257 }, 00:16:51.257 "memory_domains": [ 00:16:51.257 { 00:16:51.257 "dma_device_id": "system", 00:16:51.257 "dma_device_type": 1 00:16:51.257 }, 00:16:51.257 { 00:16:51.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.257 "dma_device_type": 2 00:16:51.257 } 00:16:51.257 ], 00:16:51.257 "driver_specific": {} 00:16:51.257 } 00:16:51.257 ] 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < 
num_base_bdevs )) 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.257 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.514 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.514 "name": "Existed_Raid", 00:16:51.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.514 "strip_size_kb": 64, 00:16:51.514 "state": "configuring", 00:16:51.514 "raid_level": "concat", 00:16:51.514 "superblock": false, 00:16:51.514 "num_base_bdevs": 4, 00:16:51.514 "num_base_bdevs_discovered": 2, 00:16:51.514 "num_base_bdevs_operational": 4, 00:16:51.514 "base_bdevs_list": [ 00:16:51.514 { 00:16:51.514 "name": "BaseBdev1", 00:16:51.514 "uuid": "de24827b-b9b6-47e2-aaa2-538bb5aff6df", 00:16:51.514 "is_configured": true, 00:16:51.514 "data_offset": 0, 00:16:51.514 "data_size": 65536 00:16:51.514 }, 00:16:51.514 { 00:16:51.514 "name": "BaseBdev2", 00:16:51.514 "uuid": "a67f7b42-91f5-42cf-9e53-468bf5953128", 00:16:51.514 "is_configured": true, 00:16:51.514 "data_offset": 0, 00:16:51.514 "data_size": 65536 00:16:51.514 }, 00:16:51.514 { 00:16:51.514 "name": "BaseBdev3", 00:16:51.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.514 "is_configured": false, 00:16:51.514 "data_offset": 0, 00:16:51.514 "data_size": 0 00:16:51.514 }, 00:16:51.514 { 00:16:51.514 "name": "BaseBdev4", 00:16:51.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.514 "is_configured": false, 00:16:51.514 "data_offset": 0, 00:16:51.514 "data_size": 0 00:16:51.514 } 00:16:51.514 ] 00:16:51.514 }' 00:16:51.514 03:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.514 03:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.079 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:52.336 [2024-05-15 03:11:23.459775] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.336 BaseBdev3 00:16:52.336 03:11:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:16:52.336 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:52.336 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:52.336 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:52.336 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:52.336 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:52.336 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.594 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:52.851 [ 00:16:52.851 { 00:16:52.851 "name": "BaseBdev3", 00:16:52.851 "aliases": [ 00:16:52.851 "8073c668-1110-4694-9e25-ba1dd1a8f9d9" 00:16:52.851 ], 00:16:52.851 "product_name": "Malloc disk", 00:16:52.851 "block_size": 512, 00:16:52.851 "num_blocks": 65536, 00:16:52.851 "uuid": "8073c668-1110-4694-9e25-ba1dd1a8f9d9", 00:16:52.851 "assigned_rate_limits": { 00:16:52.851 "rw_ios_per_sec": 0, 00:16:52.851 "rw_mbytes_per_sec": 0, 00:16:52.851 "r_mbytes_per_sec": 0, 00:16:52.851 "w_mbytes_per_sec": 0 00:16:52.851 }, 00:16:52.851 "claimed": true, 00:16:52.851 "claim_type": "exclusive_write", 00:16:52.851 "zoned": false, 00:16:52.851 "supported_io_types": { 00:16:52.851 "read": true, 00:16:52.851 "write": true, 00:16:52.851 "unmap": true, 00:16:52.851 "write_zeroes": true, 00:16:52.851 "flush": true, 00:16:52.851 "reset": true, 00:16:52.851 "compare": false, 00:16:52.851 "compare_and_write": false, 00:16:52.851 "abort": true, 00:16:52.851 "nvme_admin": false, 00:16:52.851 "nvme_io": false 00:16:52.851 }, 00:16:52.851 "memory_domains": [ 00:16:52.851 { 00:16:52.851 "dma_device_id": "system", 00:16:52.851 "dma_device_type": 1 00:16:52.851 }, 00:16:52.851 { 00:16:52.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.851 "dma_device_type": 2 00:16:52.851 } 00:16:52.851 ], 00:16:52.851 "driver_specific": {} 00:16:52.851 } 00:16:52.851 ] 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.851 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.108 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:53.108 "name": "Existed_Raid", 00:16:53.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.108 "strip_size_kb": 64, 00:16:53.108 "state": "configuring", 00:16:53.108 "raid_level": "concat", 00:16:53.108 "superblock": false, 00:16:53.108 "num_base_bdevs": 4, 00:16:53.108 "num_base_bdevs_discovered": 3, 00:16:53.108 "num_base_bdevs_operational": 4, 00:16:53.108 "base_bdevs_list": [ 00:16:53.108 { 00:16:53.108 "name": "BaseBdev1", 00:16:53.108 "uuid": "de24827b-b9b6-47e2-aaa2-538bb5aff6df", 00:16:53.108 "is_configured": true, 00:16:53.108 "data_offset": 0, 00:16:53.108 "data_size": 65536 00:16:53.108 }, 00:16:53.108 { 00:16:53.108 "name": "BaseBdev2", 00:16:53.108 "uuid": "a67f7b42-91f5-42cf-9e53-468bf5953128", 00:16:53.108 "is_configured": true, 00:16:53.108 "data_offset": 0, 00:16:53.108 "data_size": 65536 00:16:53.108 }, 00:16:53.108 { 00:16:53.108 "name": "BaseBdev3", 00:16:53.108 "uuid": "8073c668-1110-4694-9e25-ba1dd1a8f9d9", 00:16:53.108 "is_configured": true, 00:16:53.108 "data_offset": 0, 00:16:53.108 "data_size": 65536 00:16:53.108 }, 00:16:53.108 { 00:16:53.108 "name": "BaseBdev4", 00:16:53.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.108 "is_configured": false, 00:16:53.108 "data_offset": 0, 00:16:53.108 "data_size": 0 00:16:53.108 } 00:16:53.108 ] 00:16:53.108 }' 00:16:53.108 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:53.108 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.039 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:54.039 [2024-05-15 03:11:25.103487] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:54.039 [2024-05-15 03:11:25.103520] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1576670 00:16:54.039 [2024-05-15 03:11:25.103531] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:54.039 [2024-05-15 03:11:25.103725] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1578a30 00:16:54.039 [2024-05-15 03:11:25.103862] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1576670 00:16:54.039 [2024-05-15 03:11:25.103871] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1576670 00:16:54.039 [2024-05-15 03:11:25.104034] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.039 BaseBdev4 00:16:54.039 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # 
waitforbdev BaseBdev4 00:16:54.039 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:16:54.039 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:54.039 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:54.039 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:54.039 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:54.039 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.296 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:54.553 [ 00:16:54.553 { 00:16:54.553 "name": "BaseBdev4", 00:16:54.553 "aliases": [ 00:16:54.553 "55d0b407-aa23-4673-a8bb-93296979f826" 00:16:54.553 ], 00:16:54.553 "product_name": "Malloc disk", 00:16:54.553 "block_size": 512, 00:16:54.553 "num_blocks": 65536, 00:16:54.553 "uuid": "55d0b407-aa23-4673-a8bb-93296979f826", 00:16:54.553 "assigned_rate_limits": { 00:16:54.553 "rw_ios_per_sec": 0, 00:16:54.553 "rw_mbytes_per_sec": 0, 00:16:54.553 "r_mbytes_per_sec": 0, 00:16:54.553 "w_mbytes_per_sec": 0 00:16:54.553 }, 00:16:54.553 "claimed": true, 00:16:54.553 "claim_type": "exclusive_write", 00:16:54.553 "zoned": false, 00:16:54.553 "supported_io_types": { 00:16:54.553 "read": true, 00:16:54.553 "write": true, 00:16:54.553 "unmap": true, 00:16:54.553 "write_zeroes": true, 00:16:54.553 "flush": true, 00:16:54.553 "reset": true, 00:16:54.553 "compare": false, 00:16:54.553 "compare_and_write": false, 00:16:54.553 "abort": true, 00:16:54.553 "nvme_admin": false, 00:16:54.553 "nvme_io": false 00:16:54.553 }, 00:16:54.553 "memory_domains": [ 00:16:54.553 { 00:16:54.553 "dma_device_id": "system", 00:16:54.553 "dma_device_type": 1 00:16:54.553 }, 00:16:54.553 { 00:16:54.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.553 "dma_device_type": 2 00:16:54.553 } 00:16:54.553 ], 00:16:54.553 "driver_specific": {} 00:16:54.553 } 00:16:54.553 ] 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.553 03:11:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.553 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.810 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:54.810 "name": "Existed_Raid", 00:16:54.810 "uuid": "deafcbec-3c87-4019-8cc8-75c60d6ab93f", 00:16:54.810 "strip_size_kb": 64, 00:16:54.810 "state": "online", 00:16:54.810 "raid_level": "concat", 00:16:54.810 "superblock": false, 00:16:54.810 "num_base_bdevs": 4, 00:16:54.810 "num_base_bdevs_discovered": 4, 00:16:54.810 "num_base_bdevs_operational": 4, 00:16:54.810 "base_bdevs_list": [ 00:16:54.810 { 00:16:54.810 "name": "BaseBdev1", 00:16:54.810 "uuid": "de24827b-b9b6-47e2-aaa2-538bb5aff6df", 00:16:54.810 "is_configured": true, 00:16:54.810 "data_offset": 0, 00:16:54.810 "data_size": 65536 00:16:54.810 }, 00:16:54.810 { 00:16:54.810 "name": "BaseBdev2", 00:16:54.810 "uuid": "a67f7b42-91f5-42cf-9e53-468bf5953128", 00:16:54.810 "is_configured": true, 00:16:54.810 "data_offset": 0, 00:16:54.810 "data_size": 65536 00:16:54.810 }, 00:16:54.810 { 00:16:54.810 "name": "BaseBdev3", 00:16:54.810 "uuid": "8073c668-1110-4694-9e25-ba1dd1a8f9d9", 00:16:54.810 "is_configured": true, 00:16:54.810 "data_offset": 0, 00:16:54.810 "data_size": 65536 00:16:54.810 }, 00:16:54.810 { 00:16:54.810 "name": "BaseBdev4", 00:16:54.810 "uuid": "55d0b407-aa23-4673-a8bb-93296979f826", 00:16:54.810 "is_configured": true, 00:16:54.810 "data_offset": 0, 00:16:54.810 "data_size": 65536 00:16:54.810 } 00:16:54.810 ] 00:16:54.810 }' 00:16:54.810 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:54.810 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.373 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:16:55.373 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:16:55.373 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:55.373 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:55.373 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:55.373 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:55.373 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:55.373 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:55.630 [2024-05-15 03:11:26.744337] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.630 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:55.630 "name": "Existed_Raid", 00:16:55.630 "aliases": [ 00:16:55.630 
"deafcbec-3c87-4019-8cc8-75c60d6ab93f" 00:16:55.630 ], 00:16:55.630 "product_name": "Raid Volume", 00:16:55.630 "block_size": 512, 00:16:55.630 "num_blocks": 262144, 00:16:55.630 "uuid": "deafcbec-3c87-4019-8cc8-75c60d6ab93f", 00:16:55.630 "assigned_rate_limits": { 00:16:55.630 "rw_ios_per_sec": 0, 00:16:55.630 "rw_mbytes_per_sec": 0, 00:16:55.630 "r_mbytes_per_sec": 0, 00:16:55.630 "w_mbytes_per_sec": 0 00:16:55.630 }, 00:16:55.630 "claimed": false, 00:16:55.630 "zoned": false, 00:16:55.630 "supported_io_types": { 00:16:55.630 "read": true, 00:16:55.630 "write": true, 00:16:55.630 "unmap": true, 00:16:55.630 "write_zeroes": true, 00:16:55.630 "flush": true, 00:16:55.630 "reset": true, 00:16:55.630 "compare": false, 00:16:55.630 "compare_and_write": false, 00:16:55.630 "abort": false, 00:16:55.630 "nvme_admin": false, 00:16:55.630 "nvme_io": false 00:16:55.630 }, 00:16:55.630 "memory_domains": [ 00:16:55.630 { 00:16:55.630 "dma_device_id": "system", 00:16:55.630 "dma_device_type": 1 00:16:55.630 }, 00:16:55.630 { 00:16:55.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.630 "dma_device_type": 2 00:16:55.630 }, 00:16:55.630 { 00:16:55.630 "dma_device_id": "system", 00:16:55.630 "dma_device_type": 1 00:16:55.630 }, 00:16:55.630 { 00:16:55.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.630 "dma_device_type": 2 00:16:55.630 }, 00:16:55.630 { 00:16:55.630 "dma_device_id": "system", 00:16:55.630 "dma_device_type": 1 00:16:55.630 }, 00:16:55.630 { 00:16:55.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.630 "dma_device_type": 2 00:16:55.630 }, 00:16:55.630 { 00:16:55.630 "dma_device_id": "system", 00:16:55.630 "dma_device_type": 1 00:16:55.630 }, 00:16:55.630 { 00:16:55.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.630 "dma_device_type": 2 00:16:55.630 } 00:16:55.630 ], 00:16:55.630 "driver_specific": { 00:16:55.630 "raid": { 00:16:55.630 "uuid": "deafcbec-3c87-4019-8cc8-75c60d6ab93f", 00:16:55.630 "strip_size_kb": 64, 00:16:55.630 "state": "online", 00:16:55.630 "raid_level": "concat", 00:16:55.630 "superblock": false, 00:16:55.630 "num_base_bdevs": 4, 00:16:55.630 "num_base_bdevs_discovered": 4, 00:16:55.630 "num_base_bdevs_operational": 4, 00:16:55.630 "base_bdevs_list": [ 00:16:55.630 { 00:16:55.630 "name": "BaseBdev1", 00:16:55.630 "uuid": "de24827b-b9b6-47e2-aaa2-538bb5aff6df", 00:16:55.630 "is_configured": true, 00:16:55.630 "data_offset": 0, 00:16:55.630 "data_size": 65536 00:16:55.630 }, 00:16:55.630 { 00:16:55.630 "name": "BaseBdev2", 00:16:55.630 "uuid": "a67f7b42-91f5-42cf-9e53-468bf5953128", 00:16:55.630 "is_configured": true, 00:16:55.630 "data_offset": 0, 00:16:55.630 "data_size": 65536 00:16:55.631 }, 00:16:55.631 { 00:16:55.631 "name": "BaseBdev3", 00:16:55.631 "uuid": "8073c668-1110-4694-9e25-ba1dd1a8f9d9", 00:16:55.631 "is_configured": true, 00:16:55.631 "data_offset": 0, 00:16:55.631 "data_size": 65536 00:16:55.631 }, 00:16:55.631 { 00:16:55.631 "name": "BaseBdev4", 00:16:55.631 "uuid": "55d0b407-aa23-4673-a8bb-93296979f826", 00:16:55.631 "is_configured": true, 00:16:55.631 "data_offset": 0, 00:16:55.631 "data_size": 65536 00:16:55.631 } 00:16:55.631 ] 00:16:55.631 } 00:16:55.631 } 00:16:55.631 }' 00:16:55.631 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:55.888 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:16:55.888 BaseBdev2 00:16:55.888 BaseBdev3 00:16:55.888 BaseBdev4' 
00:16:55.888 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:55.888 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:55.888 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:56.146 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:56.146 "name": "BaseBdev1", 00:16:56.146 "aliases": [ 00:16:56.146 "de24827b-b9b6-47e2-aaa2-538bb5aff6df" 00:16:56.146 ], 00:16:56.146 "product_name": "Malloc disk", 00:16:56.146 "block_size": 512, 00:16:56.146 "num_blocks": 65536, 00:16:56.146 "uuid": "de24827b-b9b6-47e2-aaa2-538bb5aff6df", 00:16:56.146 "assigned_rate_limits": { 00:16:56.146 "rw_ios_per_sec": 0, 00:16:56.146 "rw_mbytes_per_sec": 0, 00:16:56.146 "r_mbytes_per_sec": 0, 00:16:56.146 "w_mbytes_per_sec": 0 00:16:56.146 }, 00:16:56.146 "claimed": true, 00:16:56.146 "claim_type": "exclusive_write", 00:16:56.146 "zoned": false, 00:16:56.146 "supported_io_types": { 00:16:56.146 "read": true, 00:16:56.146 "write": true, 00:16:56.146 "unmap": true, 00:16:56.146 "write_zeroes": true, 00:16:56.146 "flush": true, 00:16:56.146 "reset": true, 00:16:56.146 "compare": false, 00:16:56.146 "compare_and_write": false, 00:16:56.146 "abort": true, 00:16:56.146 "nvme_admin": false, 00:16:56.146 "nvme_io": false 00:16:56.146 }, 00:16:56.146 "memory_domains": [ 00:16:56.146 { 00:16:56.146 "dma_device_id": "system", 00:16:56.146 "dma_device_type": 1 00:16:56.146 }, 00:16:56.146 { 00:16:56.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.146 "dma_device_type": 2 00:16:56.146 } 00:16:56.146 ], 00:16:56.146 "driver_specific": {} 00:16:56.146 }' 00:16:56.146 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:56.146 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:56.146 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:56.146 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:56.146 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:56.146 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:56.146 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:56.146 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:56.404 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:56.404 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:56.404 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:56.404 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:56.404 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:56.404 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:56.404 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:56.661 03:11:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:56.661 "name": "BaseBdev2", 00:16:56.661 "aliases": [ 00:16:56.661 "a67f7b42-91f5-42cf-9e53-468bf5953128" 00:16:56.661 ], 00:16:56.661 "product_name": "Malloc disk", 00:16:56.661 "block_size": 512, 00:16:56.661 "num_blocks": 65536, 00:16:56.661 "uuid": "a67f7b42-91f5-42cf-9e53-468bf5953128", 00:16:56.661 "assigned_rate_limits": { 00:16:56.661 "rw_ios_per_sec": 0, 00:16:56.661 "rw_mbytes_per_sec": 0, 00:16:56.661 "r_mbytes_per_sec": 0, 00:16:56.661 "w_mbytes_per_sec": 0 00:16:56.661 }, 00:16:56.661 "claimed": true, 00:16:56.661 "claim_type": "exclusive_write", 00:16:56.661 "zoned": false, 00:16:56.661 "supported_io_types": { 00:16:56.661 "read": true, 00:16:56.661 "write": true, 00:16:56.661 "unmap": true, 00:16:56.661 "write_zeroes": true, 00:16:56.661 "flush": true, 00:16:56.661 "reset": true, 00:16:56.661 "compare": false, 00:16:56.661 "compare_and_write": false, 00:16:56.661 "abort": true, 00:16:56.661 "nvme_admin": false, 00:16:56.661 "nvme_io": false 00:16:56.661 }, 00:16:56.661 "memory_domains": [ 00:16:56.661 { 00:16:56.661 "dma_device_id": "system", 00:16:56.661 "dma_device_type": 1 00:16:56.661 }, 00:16:56.661 { 00:16:56.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.661 "dma_device_type": 2 00:16:56.661 } 00:16:56.661 ], 00:16:56.661 "driver_specific": {} 00:16:56.661 }' 00:16:56.661 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:56.661 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:56.661 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:56.661 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:56.918 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:56.918 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:56.918 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:56.918 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:56.918 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:56.918 03:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:56.918 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:56.918 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:56.918 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:57.175 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:57.175 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:57.433 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:57.433 "name": "BaseBdev3", 00:16:57.433 "aliases": [ 00:16:57.433 "8073c668-1110-4694-9e25-ba1dd1a8f9d9" 00:16:57.433 ], 00:16:57.433 "product_name": "Malloc disk", 00:16:57.433 "block_size": 512, 00:16:57.433 "num_blocks": 65536, 00:16:57.433 "uuid": "8073c668-1110-4694-9e25-ba1dd1a8f9d9", 00:16:57.433 "assigned_rate_limits": { 00:16:57.433 "rw_ios_per_sec": 0, 00:16:57.433 "rw_mbytes_per_sec": 0, 00:16:57.433 
"r_mbytes_per_sec": 0, 00:16:57.433 "w_mbytes_per_sec": 0 00:16:57.433 }, 00:16:57.433 "claimed": true, 00:16:57.433 "claim_type": "exclusive_write", 00:16:57.433 "zoned": false, 00:16:57.433 "supported_io_types": { 00:16:57.433 "read": true, 00:16:57.433 "write": true, 00:16:57.433 "unmap": true, 00:16:57.433 "write_zeroes": true, 00:16:57.433 "flush": true, 00:16:57.433 "reset": true, 00:16:57.433 "compare": false, 00:16:57.433 "compare_and_write": false, 00:16:57.433 "abort": true, 00:16:57.433 "nvme_admin": false, 00:16:57.433 "nvme_io": false 00:16:57.433 }, 00:16:57.433 "memory_domains": [ 00:16:57.433 { 00:16:57.433 "dma_device_id": "system", 00:16:57.433 "dma_device_type": 1 00:16:57.433 }, 00:16:57.433 { 00:16:57.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.433 "dma_device_type": 2 00:16:57.433 } 00:16:57.433 ], 00:16:57.433 "driver_specific": {} 00:16:57.433 }' 00:16:57.433 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:57.433 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:57.433 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:57.433 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:57.433 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:57.433 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:57.433 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:57.433 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:57.691 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:57.691 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:57.691 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:57.691 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:57.691 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:57.691 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:57.691 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:57.949 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:57.949 "name": "BaseBdev4", 00:16:57.949 "aliases": [ 00:16:57.949 "55d0b407-aa23-4673-a8bb-93296979f826" 00:16:57.949 ], 00:16:57.949 "product_name": "Malloc disk", 00:16:57.949 "block_size": 512, 00:16:57.949 "num_blocks": 65536, 00:16:57.949 "uuid": "55d0b407-aa23-4673-a8bb-93296979f826", 00:16:57.949 "assigned_rate_limits": { 00:16:57.949 "rw_ios_per_sec": 0, 00:16:57.949 "rw_mbytes_per_sec": 0, 00:16:57.949 "r_mbytes_per_sec": 0, 00:16:57.949 "w_mbytes_per_sec": 0 00:16:57.949 }, 00:16:57.949 "claimed": true, 00:16:57.949 "claim_type": "exclusive_write", 00:16:57.949 "zoned": false, 00:16:57.949 "supported_io_types": { 00:16:57.949 "read": true, 00:16:57.949 "write": true, 00:16:57.949 "unmap": true, 00:16:57.949 "write_zeroes": true, 00:16:57.949 "flush": true, 00:16:57.949 "reset": true, 00:16:57.949 "compare": false, 00:16:57.949 "compare_and_write": false, 00:16:57.949 
"abort": true, 00:16:57.949 "nvme_admin": false, 00:16:57.949 "nvme_io": false 00:16:57.949 }, 00:16:57.949 "memory_domains": [ 00:16:57.949 { 00:16:57.949 "dma_device_id": "system", 00:16:57.949 "dma_device_type": 1 00:16:57.949 }, 00:16:57.949 { 00:16:57.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.949 "dma_device_type": 2 00:16:57.949 } 00:16:57.949 ], 00:16:57.949 "driver_specific": {} 00:16:57.949 }' 00:16:57.949 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:57.949 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:57.949 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:57.949 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:57.949 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:58.207 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:58.207 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:58.207 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:58.207 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:58.207 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:58.207 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:58.207 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:58.207 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:58.465 [2024-05-15 03:11:29.543555] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:58.465 [2024-05-15 03:11:29.543579] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.465 [2024-05-15 03:11:29.543623] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.465 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.722 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.722 "name": "Existed_Raid", 00:16:58.722 "uuid": "deafcbec-3c87-4019-8cc8-75c60d6ab93f", 00:16:58.722 "strip_size_kb": 64, 00:16:58.722 "state": "offline", 00:16:58.722 "raid_level": "concat", 00:16:58.722 "superblock": false, 00:16:58.722 "num_base_bdevs": 4, 00:16:58.722 "num_base_bdevs_discovered": 3, 00:16:58.722 "num_base_bdevs_operational": 3, 00:16:58.722 "base_bdevs_list": [ 00:16:58.722 { 00:16:58.722 "name": null, 00:16:58.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.722 "is_configured": false, 00:16:58.722 "data_offset": 0, 00:16:58.722 "data_size": 65536 00:16:58.722 }, 00:16:58.722 { 00:16:58.722 "name": "BaseBdev2", 00:16:58.722 "uuid": "a67f7b42-91f5-42cf-9e53-468bf5953128", 00:16:58.722 "is_configured": true, 00:16:58.722 "data_offset": 0, 00:16:58.722 "data_size": 65536 00:16:58.722 }, 00:16:58.722 { 00:16:58.722 "name": "BaseBdev3", 00:16:58.722 "uuid": "8073c668-1110-4694-9e25-ba1dd1a8f9d9", 00:16:58.722 "is_configured": true, 00:16:58.722 "data_offset": 0, 00:16:58.722 "data_size": 65536 00:16:58.722 }, 00:16:58.722 { 00:16:58.722 "name": "BaseBdev4", 00:16:58.722 "uuid": "55d0b407-aa23-4673-a8bb-93296979f826", 00:16:58.722 "is_configured": true, 00:16:58.722 "data_offset": 0, 00:16:58.722 "data_size": 65536 00:16:58.722 } 00:16:58.722 ] 00:16:58.722 }' 00:16:58.723 03:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.723 03:11:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.655 03:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:59.655 03:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:59.655 03:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:16:59.655 03:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.655 03:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:16:59.655 03:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:59.655 03:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:59.912 [2024-05-15 03:11:30.944535] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:59.913 03:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:59.913 03:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
(( i < num_base_bdevs )) 00:16:59.913 03:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.913 03:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:00.170 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:00.170 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:00.170 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:00.428 [2024-05-15 03:11:31.460212] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:00.428 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:00.428 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:00.428 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.428 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:00.687 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:00.687 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:00.687 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:00.944 [2024-05-15 03:11:31.887785] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:00.944 [2024-05-15 03:11:31.887824] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1576670 name Existed_Raid, state offline 00:17:00.944 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:00.944 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:00.944 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.944 03:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:17:01.202 03:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:17:01.202 03:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:17:01.202 03:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:17:01.202 03:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:17:01.202 03:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:01.202 03:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:01.459 BaseBdev2 00:17:01.459 03:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:17:01.459 03:11:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:01.459 03:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:01.459 03:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:01.459 03:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:01.459 03:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:01.459 03:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:01.727 03:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:02.001 [ 00:17:02.001 { 00:17:02.001 "name": "BaseBdev2", 00:17:02.001 "aliases": [ 00:17:02.001 "096ab0ea-bfc4-498f-847a-1ee2de5bff02" 00:17:02.001 ], 00:17:02.001 "product_name": "Malloc disk", 00:17:02.001 "block_size": 512, 00:17:02.001 "num_blocks": 65536, 00:17:02.001 "uuid": "096ab0ea-bfc4-498f-847a-1ee2de5bff02", 00:17:02.001 "assigned_rate_limits": { 00:17:02.001 "rw_ios_per_sec": 0, 00:17:02.001 "rw_mbytes_per_sec": 0, 00:17:02.001 "r_mbytes_per_sec": 0, 00:17:02.001 "w_mbytes_per_sec": 0 00:17:02.001 }, 00:17:02.001 "claimed": false, 00:17:02.001 "zoned": false, 00:17:02.001 "supported_io_types": { 00:17:02.001 "read": true, 00:17:02.001 "write": true, 00:17:02.001 "unmap": true, 00:17:02.001 "write_zeroes": true, 00:17:02.001 "flush": true, 00:17:02.001 "reset": true, 00:17:02.001 "compare": false, 00:17:02.001 "compare_and_write": false, 00:17:02.001 "abort": true, 00:17:02.001 "nvme_admin": false, 00:17:02.001 "nvme_io": false 00:17:02.001 }, 00:17:02.001 "memory_domains": [ 00:17:02.001 { 00:17:02.001 "dma_device_id": "system", 00:17:02.001 "dma_device_type": 1 00:17:02.001 }, 00:17:02.001 { 00:17:02.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.001 "dma_device_type": 2 00:17:02.001 } 00:17:02.001 ], 00:17:02.001 "driver_specific": {} 00:17:02.001 } 00:17:02.001 ] 00:17:02.001 03:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:02.001 03:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:02.001 03:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:02.001 03:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:02.001 BaseBdev3 00:17:02.259 03:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:17:02.259 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:02.259 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:02.259 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:02.259 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:02.259 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:02.259 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:02.516 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:02.516 [ 00:17:02.516 { 00:17:02.516 "name": "BaseBdev3", 00:17:02.516 "aliases": [ 00:17:02.516 "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf" 00:17:02.516 ], 00:17:02.516 "product_name": "Malloc disk", 00:17:02.516 "block_size": 512, 00:17:02.516 "num_blocks": 65536, 00:17:02.516 "uuid": "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf", 00:17:02.516 "assigned_rate_limits": { 00:17:02.516 "rw_ios_per_sec": 0, 00:17:02.516 "rw_mbytes_per_sec": 0, 00:17:02.516 "r_mbytes_per_sec": 0, 00:17:02.516 "w_mbytes_per_sec": 0 00:17:02.516 }, 00:17:02.516 "claimed": false, 00:17:02.516 "zoned": false, 00:17:02.516 "supported_io_types": { 00:17:02.516 "read": true, 00:17:02.516 "write": true, 00:17:02.516 "unmap": true, 00:17:02.516 "write_zeroes": true, 00:17:02.516 "flush": true, 00:17:02.516 "reset": true, 00:17:02.516 "compare": false, 00:17:02.516 "compare_and_write": false, 00:17:02.516 "abort": true, 00:17:02.516 "nvme_admin": false, 00:17:02.516 "nvme_io": false 00:17:02.516 }, 00:17:02.516 "memory_domains": [ 00:17:02.516 { 00:17:02.516 "dma_device_id": "system", 00:17:02.516 "dma_device_type": 1 00:17:02.516 }, 00:17:02.516 { 00:17:02.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.516 "dma_device_type": 2 00:17:02.516 } 00:17:02.516 ], 00:17:02.516 "driver_specific": {} 00:17:02.516 } 00:17:02.516 ] 00:17:02.773 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:02.773 03:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:02.773 03:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:02.773 03:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:02.773 BaseBdev4 00:17:02.773 03:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:17:02.773 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:17:02.773 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:02.773 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:02.773 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:02.773 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:02.773 03:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:03.029 03:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:03.287 [ 00:17:03.287 { 00:17:03.287 "name": "BaseBdev4", 00:17:03.287 "aliases": [ 00:17:03.287 "2690244e-d795-4606-924e-27730f3bdb8b" 00:17:03.287 ], 00:17:03.287 "product_name": "Malloc disk", 00:17:03.287 "block_size": 512, 00:17:03.287 
"num_blocks": 65536, 00:17:03.287 "uuid": "2690244e-d795-4606-924e-27730f3bdb8b", 00:17:03.287 "assigned_rate_limits": { 00:17:03.287 "rw_ios_per_sec": 0, 00:17:03.287 "rw_mbytes_per_sec": 0, 00:17:03.287 "r_mbytes_per_sec": 0, 00:17:03.287 "w_mbytes_per_sec": 0 00:17:03.287 }, 00:17:03.287 "claimed": false, 00:17:03.287 "zoned": false, 00:17:03.287 "supported_io_types": { 00:17:03.287 "read": true, 00:17:03.287 "write": true, 00:17:03.287 "unmap": true, 00:17:03.287 "write_zeroes": true, 00:17:03.287 "flush": true, 00:17:03.287 "reset": true, 00:17:03.287 "compare": false, 00:17:03.287 "compare_and_write": false, 00:17:03.287 "abort": true, 00:17:03.287 "nvme_admin": false, 00:17:03.287 "nvme_io": false 00:17:03.287 }, 00:17:03.287 "memory_domains": [ 00:17:03.287 { 00:17:03.287 "dma_device_id": "system", 00:17:03.287 "dma_device_type": 1 00:17:03.287 }, 00:17:03.287 { 00:17:03.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.287 "dma_device_type": 2 00:17:03.287 } 00:17:03.287 ], 00:17:03.287 "driver_specific": {} 00:17:03.287 } 00:17:03.287 ] 00:17:03.287 03:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:03.287 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:03.287 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:03.287 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:03.544 [2024-05-15 03:11:34.646813] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.544 [2024-05-15 03:11:34.646853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.544 [2024-05-15 03:11:34.646871] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.544 [2024-05-15 03:11:34.648257] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:03.544 [2024-05-15 03:11:34.648299] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:03.544 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:03.544 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:03.544 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:03.544 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:03.544 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:03.544 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:03.544 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:03.544 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:03.544 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:03.544 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:03.545 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.545 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.802 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.802 "name": "Existed_Raid", 00:17:03.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.802 "strip_size_kb": 64, 00:17:03.802 "state": "configuring", 00:17:03.802 "raid_level": "concat", 00:17:03.802 "superblock": false, 00:17:03.802 "num_base_bdevs": 4, 00:17:03.802 "num_base_bdevs_discovered": 3, 00:17:03.802 "num_base_bdevs_operational": 4, 00:17:03.802 "base_bdevs_list": [ 00:17:03.802 { 00:17:03.802 "name": "BaseBdev1", 00:17:03.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.802 "is_configured": false, 00:17:03.802 "data_offset": 0, 00:17:03.802 "data_size": 0 00:17:03.802 }, 00:17:03.802 { 00:17:03.802 "name": "BaseBdev2", 00:17:03.802 "uuid": "096ab0ea-bfc4-498f-847a-1ee2de5bff02", 00:17:03.802 "is_configured": true, 00:17:03.802 "data_offset": 0, 00:17:03.802 "data_size": 65536 00:17:03.802 }, 00:17:03.802 { 00:17:03.802 "name": "BaseBdev3", 00:17:03.802 "uuid": "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf", 00:17:03.802 "is_configured": true, 00:17:03.802 "data_offset": 0, 00:17:03.802 "data_size": 65536 00:17:03.802 }, 00:17:03.802 { 00:17:03.802 "name": "BaseBdev4", 00:17:03.802 "uuid": "2690244e-d795-4606-924e-27730f3bdb8b", 00:17:03.802 "is_configured": true, 00:17:03.802 "data_offset": 0, 00:17:03.802 "data_size": 65536 00:17:03.802 } 00:17:03.802 ] 00:17:03.802 }' 00:17:03.802 03:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.802 03:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.367 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:04.625 [2024-05-15 03:11:35.685719] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.625 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:04.625 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:04.625 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:04.625 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:04.625 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:04.625 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:04.625 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:04.625 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:04.625 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:04.625 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:04.625 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:04.625 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.883 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:04.883 "name": "Existed_Raid", 00:17:04.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.883 "strip_size_kb": 64, 00:17:04.883 "state": "configuring", 00:17:04.883 "raid_level": "concat", 00:17:04.883 "superblock": false, 00:17:04.883 "num_base_bdevs": 4, 00:17:04.883 "num_base_bdevs_discovered": 2, 00:17:04.883 "num_base_bdevs_operational": 4, 00:17:04.883 "base_bdevs_list": [ 00:17:04.883 { 00:17:04.883 "name": "BaseBdev1", 00:17:04.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.883 "is_configured": false, 00:17:04.883 "data_offset": 0, 00:17:04.883 "data_size": 0 00:17:04.883 }, 00:17:04.883 { 00:17:04.883 "name": null, 00:17:04.883 "uuid": "096ab0ea-bfc4-498f-847a-1ee2de5bff02", 00:17:04.883 "is_configured": false, 00:17:04.883 "data_offset": 0, 00:17:04.883 "data_size": 65536 00:17:04.883 }, 00:17:04.883 { 00:17:04.883 "name": "BaseBdev3", 00:17:04.883 "uuid": "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf", 00:17:04.883 "is_configured": true, 00:17:04.883 "data_offset": 0, 00:17:04.883 "data_size": 65536 00:17:04.883 }, 00:17:04.883 { 00:17:04.883 "name": "BaseBdev4", 00:17:04.883 "uuid": "2690244e-d795-4606-924e-27730f3bdb8b", 00:17:04.883 "is_configured": true, 00:17:04.883 "data_offset": 0, 00:17:04.883 "data_size": 65536 00:17:04.883 } 00:17:04.883 ] 00:17:04.883 }' 00:17:04.883 03:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:04.883 03:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.446 03:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.446 03:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:05.702 03:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:17:05.702 03:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:05.959 [2024-05-15 03:11:37.072597] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.959 BaseBdev1 00:17:05.959 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:17:05.959 03:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:05.959 03:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:05.959 03:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:05.959 03:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:05.959 03:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:05.959 03:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:06.216 03:11:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:06.474 [ 00:17:06.474 { 00:17:06.474 "name": "BaseBdev1", 00:17:06.474 "aliases": [ 00:17:06.474 "1c533323-48d1-4e4c-98d5-da573c9ccbd0" 00:17:06.474 ], 00:17:06.474 "product_name": "Malloc disk", 00:17:06.474 "block_size": 512, 00:17:06.474 "num_blocks": 65536, 00:17:06.474 "uuid": "1c533323-48d1-4e4c-98d5-da573c9ccbd0", 00:17:06.474 "assigned_rate_limits": { 00:17:06.474 "rw_ios_per_sec": 0, 00:17:06.474 "rw_mbytes_per_sec": 0, 00:17:06.474 "r_mbytes_per_sec": 0, 00:17:06.474 "w_mbytes_per_sec": 0 00:17:06.474 }, 00:17:06.474 "claimed": true, 00:17:06.474 "claim_type": "exclusive_write", 00:17:06.474 "zoned": false, 00:17:06.474 "supported_io_types": { 00:17:06.474 "read": true, 00:17:06.474 "write": true, 00:17:06.474 "unmap": true, 00:17:06.474 "write_zeroes": true, 00:17:06.474 "flush": true, 00:17:06.474 "reset": true, 00:17:06.474 "compare": false, 00:17:06.474 "compare_and_write": false, 00:17:06.474 "abort": true, 00:17:06.474 "nvme_admin": false, 00:17:06.474 "nvme_io": false 00:17:06.474 }, 00:17:06.474 "memory_domains": [ 00:17:06.474 { 00:17:06.474 "dma_device_id": "system", 00:17:06.474 "dma_device_type": 1 00:17:06.474 }, 00:17:06.474 { 00:17:06.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.474 "dma_device_type": 2 00:17:06.474 } 00:17:06.474 ], 00:17:06.474 "driver_specific": {} 00:17:06.474 } 00:17:06.474 ] 00:17:06.474 03:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:06.474 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:06.474 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:06.475 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:06.475 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:06.475 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:06.475 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:06.475 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:06.475 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:06.475 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:06.475 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:06.475 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.475 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.732 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:06.732 "name": "Existed_Raid", 00:17:06.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.732 "strip_size_kb": 64, 00:17:06.732 "state": "configuring", 00:17:06.732 "raid_level": "concat", 00:17:06.732 "superblock": false, 00:17:06.732 "num_base_bdevs": 4, 00:17:06.732 "num_base_bdevs_discovered": 3, 
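# (A minimal sketch, not part of the captured output: the verify_raid_bdev_state
# helper being traced here reduces to one RPC plus a jq filter. Assuming the
# rpc.py path and socket used throughout this run, the equivalent manual check
# is roughly:)
#
#   rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
#   info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
#   [[ $(echo "$info" | jq -r .state) == configuring ]]
#   [[ $(echo "$info" | jq -r .raid_level) == concat ]]
#   (( $(echo "$info" | jq -r .num_base_bdevs_discovered) == 3 ))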
00:17:06.732 "num_base_bdevs_operational": 4, 00:17:06.732 "base_bdevs_list": [ 00:17:06.732 { 00:17:06.732 "name": "BaseBdev1", 00:17:06.732 "uuid": "1c533323-48d1-4e4c-98d5-da573c9ccbd0", 00:17:06.732 "is_configured": true, 00:17:06.732 "data_offset": 0, 00:17:06.733 "data_size": 65536 00:17:06.733 }, 00:17:06.733 { 00:17:06.733 "name": null, 00:17:06.733 "uuid": "096ab0ea-bfc4-498f-847a-1ee2de5bff02", 00:17:06.733 "is_configured": false, 00:17:06.733 "data_offset": 0, 00:17:06.733 "data_size": 65536 00:17:06.733 }, 00:17:06.733 { 00:17:06.733 "name": "BaseBdev3", 00:17:06.733 "uuid": "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf", 00:17:06.733 "is_configured": true, 00:17:06.733 "data_offset": 0, 00:17:06.733 "data_size": 65536 00:17:06.733 }, 00:17:06.733 { 00:17:06.733 "name": "BaseBdev4", 00:17:06.733 "uuid": "2690244e-d795-4606-924e-27730f3bdb8b", 00:17:06.733 "is_configured": true, 00:17:06.733 "data_offset": 0, 00:17:06.733 "data_size": 65536 00:17:06.733 } 00:17:06.733 ] 00:17:06.733 }' 00:17:06.733 03:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:06.733 03:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.298 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.298 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:07.557 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:07.557 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:07.815 [2024-05-15 03:11:38.873444] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:07.815 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:07.815 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.815 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.815 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:07.815 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:07.815 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:07.815 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.815 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.815 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.815 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.815 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.815 03:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.074 03:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:08.074 "name": "Existed_Raid", 00:17:08.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.074 "strip_size_kb": 64, 00:17:08.074 "state": "configuring", 00:17:08.074 "raid_level": "concat", 00:17:08.074 "superblock": false, 00:17:08.074 "num_base_bdevs": 4, 00:17:08.074 "num_base_bdevs_discovered": 2, 00:17:08.074 "num_base_bdevs_operational": 4, 00:17:08.074 "base_bdevs_list": [ 00:17:08.074 { 00:17:08.074 "name": "BaseBdev1", 00:17:08.074 "uuid": "1c533323-48d1-4e4c-98d5-da573c9ccbd0", 00:17:08.074 "is_configured": true, 00:17:08.074 "data_offset": 0, 00:17:08.074 "data_size": 65536 00:17:08.074 }, 00:17:08.074 { 00:17:08.074 "name": null, 00:17:08.074 "uuid": "096ab0ea-bfc4-498f-847a-1ee2de5bff02", 00:17:08.074 "is_configured": false, 00:17:08.074 "data_offset": 0, 00:17:08.074 "data_size": 65536 00:17:08.074 }, 00:17:08.074 { 00:17:08.074 "name": null, 00:17:08.074 "uuid": "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf", 00:17:08.074 "is_configured": false, 00:17:08.074 "data_offset": 0, 00:17:08.074 "data_size": 65536 00:17:08.074 }, 00:17:08.074 { 00:17:08.074 "name": "BaseBdev4", 00:17:08.074 "uuid": "2690244e-d795-4606-924e-27730f3bdb8b", 00:17:08.074 "is_configured": true, 00:17:08.074 "data_offset": 0, 00:17:08.074 "data_size": 65536 00:17:08.074 } 00:17:08.074 ] 00:17:08.074 }' 00:17:08.074 03:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.074 03:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.640 03:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.640 03:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:08.898 03:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:17:08.898 03:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:09.155 [2024-05-15 03:11:40.140883] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:09.155 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:09.155 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:09.155 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.155 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:09.155 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.155 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:09.155 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.155 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.155 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.155 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.155 03:11:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.155 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.413 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.413 "name": "Existed_Raid", 00:17:09.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.413 "strip_size_kb": 64, 00:17:09.413 "state": "configuring", 00:17:09.413 "raid_level": "concat", 00:17:09.413 "superblock": false, 00:17:09.413 "num_base_bdevs": 4, 00:17:09.413 "num_base_bdevs_discovered": 3, 00:17:09.413 "num_base_bdevs_operational": 4, 00:17:09.413 "base_bdevs_list": [ 00:17:09.413 { 00:17:09.413 "name": "BaseBdev1", 00:17:09.413 "uuid": "1c533323-48d1-4e4c-98d5-da573c9ccbd0", 00:17:09.413 "is_configured": true, 00:17:09.413 "data_offset": 0, 00:17:09.413 "data_size": 65536 00:17:09.413 }, 00:17:09.413 { 00:17:09.413 "name": null, 00:17:09.413 "uuid": "096ab0ea-bfc4-498f-847a-1ee2de5bff02", 00:17:09.413 "is_configured": false, 00:17:09.413 "data_offset": 0, 00:17:09.413 "data_size": 65536 00:17:09.413 }, 00:17:09.413 { 00:17:09.413 "name": "BaseBdev3", 00:17:09.413 "uuid": "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf", 00:17:09.413 "is_configured": true, 00:17:09.413 "data_offset": 0, 00:17:09.413 "data_size": 65536 00:17:09.413 }, 00:17:09.413 { 00:17:09.413 "name": "BaseBdev4", 00:17:09.413 "uuid": "2690244e-d795-4606-924e-27730f3bdb8b", 00:17:09.413 "is_configured": true, 00:17:09.413 "data_offset": 0, 00:17:09.413 "data_size": 65536 00:17:09.413 } 00:17:09.413 ] 00:17:09.413 }' 00:17:09.413 03:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.413 03:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.979 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.979 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:10.237 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:17:10.237 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:10.495 [2024-05-15 03:11:41.516567] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:10.495 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:10.495 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.495 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:10.495 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:10.495 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:10.495 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:10.495 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.495 03:11:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.495 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.495 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.495 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.495 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.753 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.753 "name": "Existed_Raid", 00:17:10.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.753 "strip_size_kb": 64, 00:17:10.753 "state": "configuring", 00:17:10.753 "raid_level": "concat", 00:17:10.753 "superblock": false, 00:17:10.753 "num_base_bdevs": 4, 00:17:10.753 "num_base_bdevs_discovered": 2, 00:17:10.753 "num_base_bdevs_operational": 4, 00:17:10.753 "base_bdevs_list": [ 00:17:10.753 { 00:17:10.753 "name": null, 00:17:10.753 "uuid": "1c533323-48d1-4e4c-98d5-da573c9ccbd0", 00:17:10.753 "is_configured": false, 00:17:10.753 "data_offset": 0, 00:17:10.753 "data_size": 65536 00:17:10.753 }, 00:17:10.753 { 00:17:10.753 "name": null, 00:17:10.753 "uuid": "096ab0ea-bfc4-498f-847a-1ee2de5bff02", 00:17:10.753 "is_configured": false, 00:17:10.753 "data_offset": 0, 00:17:10.753 "data_size": 65536 00:17:10.753 }, 00:17:10.753 { 00:17:10.753 "name": "BaseBdev3", 00:17:10.753 "uuid": "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf", 00:17:10.753 "is_configured": true, 00:17:10.753 "data_offset": 0, 00:17:10.753 "data_size": 65536 00:17:10.753 }, 00:17:10.753 { 00:17:10.753 "name": "BaseBdev4", 00:17:10.753 "uuid": "2690244e-d795-4606-924e-27730f3bdb8b", 00:17:10.753 "is_configured": true, 00:17:10.753 "data_offset": 0, 00:17:10.753 "data_size": 65536 00:17:10.753 } 00:17:10.753 ] 00:17:10.753 }' 00:17:10.753 03:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.753 03:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.321 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.321 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:11.578 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:17:11.578 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:11.836 [2024-05-15 03:11:42.926867] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.836 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:11.836 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:11.836 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:11.836 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:11.836 03:11:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:11.836 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:11.836 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.836 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.836 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.836 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.836 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.836 03:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.094 03:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.094 "name": "Existed_Raid", 00:17:12.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.094 "strip_size_kb": 64, 00:17:12.094 "state": "configuring", 00:17:12.094 "raid_level": "concat", 00:17:12.094 "superblock": false, 00:17:12.094 "num_base_bdevs": 4, 00:17:12.094 "num_base_bdevs_discovered": 3, 00:17:12.094 "num_base_bdevs_operational": 4, 00:17:12.094 "base_bdevs_list": [ 00:17:12.094 { 00:17:12.094 "name": null, 00:17:12.094 "uuid": "1c533323-48d1-4e4c-98d5-da573c9ccbd0", 00:17:12.094 "is_configured": false, 00:17:12.094 "data_offset": 0, 00:17:12.094 "data_size": 65536 00:17:12.094 }, 00:17:12.094 { 00:17:12.094 "name": "BaseBdev2", 00:17:12.094 "uuid": "096ab0ea-bfc4-498f-847a-1ee2de5bff02", 00:17:12.094 "is_configured": true, 00:17:12.094 "data_offset": 0, 00:17:12.094 "data_size": 65536 00:17:12.094 }, 00:17:12.094 { 00:17:12.094 "name": "BaseBdev3", 00:17:12.094 "uuid": "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf", 00:17:12.094 "is_configured": true, 00:17:12.094 "data_offset": 0, 00:17:12.094 "data_size": 65536 00:17:12.094 }, 00:17:12.094 { 00:17:12.094 "name": "BaseBdev4", 00:17:12.094 "uuid": "2690244e-d795-4606-924e-27730f3bdb8b", 00:17:12.094 "is_configured": true, 00:17:12.094 "data_offset": 0, 00:17:12.094 "data_size": 65536 00:17:12.094 } 00:17:12.094 ] 00:17:12.094 }' 00:17:12.094 03:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.094 03:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.026 03:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.026 03:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:13.026 03:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:17:13.026 03:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.026 03:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:13.284 03:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b NewBaseBdev -u 1c533323-48d1-4e4c-98d5-da573c9ccbd0 00:17:13.541 [2024-05-15 03:11:44.582430] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:13.541 [2024-05-15 03:11:44.582463] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x15773e0 00:17:13.541 [2024-05-15 03:11:44.582470] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:13.541 [2024-05-15 03:11:44.582663] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1563610 00:17:13.541 [2024-05-15 03:11:44.582786] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15773e0 00:17:13.541 [2024-05-15 03:11:44.582795] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15773e0 00:17:13.541 [2024-05-15 03:11:44.582967] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.541 NewBaseBdev 00:17:13.541 03:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:17:13.541 03:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:17:13.541 03:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:13.541 03:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:13.541 03:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:13.541 03:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:13.541 03:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:13.800 03:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:14.058 [ 00:17:14.058 { 00:17:14.058 "name": "NewBaseBdev", 00:17:14.058 "aliases": [ 00:17:14.058 "1c533323-48d1-4e4c-98d5-da573c9ccbd0" 00:17:14.058 ], 00:17:14.058 "product_name": "Malloc disk", 00:17:14.058 "block_size": 512, 00:17:14.058 "num_blocks": 65536, 00:17:14.058 "uuid": "1c533323-48d1-4e4c-98d5-da573c9ccbd0", 00:17:14.058 "assigned_rate_limits": { 00:17:14.058 "rw_ios_per_sec": 0, 00:17:14.058 "rw_mbytes_per_sec": 0, 00:17:14.058 "r_mbytes_per_sec": 0, 00:17:14.058 "w_mbytes_per_sec": 0 00:17:14.058 }, 00:17:14.058 "claimed": true, 00:17:14.058 "claim_type": "exclusive_write", 00:17:14.058 "zoned": false, 00:17:14.058 "supported_io_types": { 00:17:14.058 "read": true, 00:17:14.058 "write": true, 00:17:14.058 "unmap": true, 00:17:14.058 "write_zeroes": true, 00:17:14.058 "flush": true, 00:17:14.058 "reset": true, 00:17:14.058 "compare": false, 00:17:14.058 "compare_and_write": false, 00:17:14.058 "abort": true, 00:17:14.058 "nvme_admin": false, 00:17:14.058 "nvme_io": false 00:17:14.058 }, 00:17:14.058 "memory_domains": [ 00:17:14.058 { 00:17:14.058 "dma_device_id": "system", 00:17:14.058 "dma_device_type": 1 00:17:14.058 }, 00:17:14.058 { 00:17:14.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.058 "dma_device_type": 2 00:17:14.058 } 00:17:14.058 ], 00:17:14.058 "driver_specific": {} 00:17:14.058 } 00:17:14.058 ] 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
return 0 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.058 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.316 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.316 "name": "Existed_Raid", 00:17:14.316 "uuid": "6b220408-f26d-404e-8d4d-574f8b51f852", 00:17:14.316 "strip_size_kb": 64, 00:17:14.316 "state": "online", 00:17:14.316 "raid_level": "concat", 00:17:14.316 "superblock": false, 00:17:14.316 "num_base_bdevs": 4, 00:17:14.316 "num_base_bdevs_discovered": 4, 00:17:14.316 "num_base_bdevs_operational": 4, 00:17:14.316 "base_bdevs_list": [ 00:17:14.316 { 00:17:14.316 "name": "NewBaseBdev", 00:17:14.316 "uuid": "1c533323-48d1-4e4c-98d5-da573c9ccbd0", 00:17:14.316 "is_configured": true, 00:17:14.316 "data_offset": 0, 00:17:14.316 "data_size": 65536 00:17:14.316 }, 00:17:14.316 { 00:17:14.316 "name": "BaseBdev2", 00:17:14.316 "uuid": "096ab0ea-bfc4-498f-847a-1ee2de5bff02", 00:17:14.316 "is_configured": true, 00:17:14.316 "data_offset": 0, 00:17:14.316 "data_size": 65536 00:17:14.316 }, 00:17:14.316 { 00:17:14.316 "name": "BaseBdev3", 00:17:14.316 "uuid": "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf", 00:17:14.316 "is_configured": true, 00:17:14.316 "data_offset": 0, 00:17:14.316 "data_size": 65536 00:17:14.316 }, 00:17:14.316 { 00:17:14.316 "name": "BaseBdev4", 00:17:14.316 "uuid": "2690244e-d795-4606-924e-27730f3bdb8b", 00:17:14.316 "is_configured": true, 00:17:14.316 "data_offset": 0, 00:17:14.316 "data_size": 65536 00:17:14.316 } 00:17:14.316 ] 00:17:14.316 }' 00:17:14.316 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.316 03:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.881 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:17:14.881 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:17:14.881 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:14.881 03:11:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:14.881 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:14.881 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:17:14.881 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:14.881 03:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:15.139 [2024-05-15 03:11:46.187043] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.139 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:15.139 "name": "Existed_Raid", 00:17:15.139 "aliases": [ 00:17:15.139 "6b220408-f26d-404e-8d4d-574f8b51f852" 00:17:15.139 ], 00:17:15.139 "product_name": "Raid Volume", 00:17:15.139 "block_size": 512, 00:17:15.139 "num_blocks": 262144, 00:17:15.139 "uuid": "6b220408-f26d-404e-8d4d-574f8b51f852", 00:17:15.139 "assigned_rate_limits": { 00:17:15.139 "rw_ios_per_sec": 0, 00:17:15.139 "rw_mbytes_per_sec": 0, 00:17:15.139 "r_mbytes_per_sec": 0, 00:17:15.139 "w_mbytes_per_sec": 0 00:17:15.139 }, 00:17:15.139 "claimed": false, 00:17:15.139 "zoned": false, 00:17:15.139 "supported_io_types": { 00:17:15.139 "read": true, 00:17:15.139 "write": true, 00:17:15.139 "unmap": true, 00:17:15.139 "write_zeroes": true, 00:17:15.139 "flush": true, 00:17:15.139 "reset": true, 00:17:15.139 "compare": false, 00:17:15.139 "compare_and_write": false, 00:17:15.139 "abort": false, 00:17:15.139 "nvme_admin": false, 00:17:15.139 "nvme_io": false 00:17:15.139 }, 00:17:15.139 "memory_domains": [ 00:17:15.139 { 00:17:15.139 "dma_device_id": "system", 00:17:15.139 "dma_device_type": 1 00:17:15.139 }, 00:17:15.139 { 00:17:15.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.139 "dma_device_type": 2 00:17:15.139 }, 00:17:15.139 { 00:17:15.139 "dma_device_id": "system", 00:17:15.139 "dma_device_type": 1 00:17:15.139 }, 00:17:15.139 { 00:17:15.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.139 "dma_device_type": 2 00:17:15.139 }, 00:17:15.139 { 00:17:15.139 "dma_device_id": "system", 00:17:15.139 "dma_device_type": 1 00:17:15.139 }, 00:17:15.139 { 00:17:15.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.139 "dma_device_type": 2 00:17:15.139 }, 00:17:15.139 { 00:17:15.139 "dma_device_id": "system", 00:17:15.139 "dma_device_type": 1 00:17:15.139 }, 00:17:15.139 { 00:17:15.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.139 "dma_device_type": 2 00:17:15.139 } 00:17:15.139 ], 00:17:15.139 "driver_specific": { 00:17:15.139 "raid": { 00:17:15.139 "uuid": "6b220408-f26d-404e-8d4d-574f8b51f852", 00:17:15.139 "strip_size_kb": 64, 00:17:15.139 "state": "online", 00:17:15.139 "raid_level": "concat", 00:17:15.139 "superblock": false, 00:17:15.140 "num_base_bdevs": 4, 00:17:15.140 "num_base_bdevs_discovered": 4, 00:17:15.140 "num_base_bdevs_operational": 4, 00:17:15.140 "base_bdevs_list": [ 00:17:15.140 { 00:17:15.140 "name": "NewBaseBdev", 00:17:15.140 "uuid": "1c533323-48d1-4e4c-98d5-da573c9ccbd0", 00:17:15.140 "is_configured": true, 00:17:15.140 "data_offset": 0, 00:17:15.140 "data_size": 65536 00:17:15.140 }, 00:17:15.140 { 00:17:15.140 "name": "BaseBdev2", 00:17:15.140 "uuid": "096ab0ea-bfc4-498f-847a-1ee2de5bff02", 00:17:15.140 "is_configured": true, 00:17:15.140 "data_offset": 0, 00:17:15.140 "data_size": 65536 00:17:15.140 }, 
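# (A minimal sketch, not part of the captured output: verify_raid_bdev_properties,
# whose @201-@209 trace follows, pulls the configured member names out of
# driver_specific.raid and compares a few geometry fields of the array against
# each member; the jq filters are the ones visible in the trace, and $rpc is the
# shorthand assumed above:)
#
#   raid=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
#   names=$(echo "$raid" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
#   for name in $names; do
#       base=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
#       [[ $(echo "$raid" | jq .block_size) == $(echo "$base" | jq .block_size) ]]
#       [[ $(echo "$raid" | jq .md_size) == $(echo "$base" | jq .md_size) ]]
#   done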
00:17:15.140 { 00:17:15.140 "name": "BaseBdev3", 00:17:15.140 "uuid": "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf", 00:17:15.140 "is_configured": true, 00:17:15.140 "data_offset": 0, 00:17:15.140 "data_size": 65536 00:17:15.140 }, 00:17:15.140 { 00:17:15.140 "name": "BaseBdev4", 00:17:15.140 "uuid": "2690244e-d795-4606-924e-27730f3bdb8b", 00:17:15.140 "is_configured": true, 00:17:15.140 "data_offset": 0, 00:17:15.140 "data_size": 65536 00:17:15.140 } 00:17:15.140 ] 00:17:15.140 } 00:17:15.140 } 00:17:15.140 }' 00:17:15.140 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:15.140 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:17:15.140 BaseBdev2 00:17:15.140 BaseBdev3 00:17:15.140 BaseBdev4' 00:17:15.140 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:15.140 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:15.140 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:15.406 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:15.406 "name": "NewBaseBdev", 00:17:15.406 "aliases": [ 00:17:15.406 "1c533323-48d1-4e4c-98d5-da573c9ccbd0" 00:17:15.406 ], 00:17:15.406 "product_name": "Malloc disk", 00:17:15.406 "block_size": 512, 00:17:15.406 "num_blocks": 65536, 00:17:15.406 "uuid": "1c533323-48d1-4e4c-98d5-da573c9ccbd0", 00:17:15.406 "assigned_rate_limits": { 00:17:15.406 "rw_ios_per_sec": 0, 00:17:15.406 "rw_mbytes_per_sec": 0, 00:17:15.406 "r_mbytes_per_sec": 0, 00:17:15.406 "w_mbytes_per_sec": 0 00:17:15.406 }, 00:17:15.406 "claimed": true, 00:17:15.406 "claim_type": "exclusive_write", 00:17:15.406 "zoned": false, 00:17:15.406 "supported_io_types": { 00:17:15.406 "read": true, 00:17:15.406 "write": true, 00:17:15.406 "unmap": true, 00:17:15.406 "write_zeroes": true, 00:17:15.406 "flush": true, 00:17:15.406 "reset": true, 00:17:15.406 "compare": false, 00:17:15.406 "compare_and_write": false, 00:17:15.406 "abort": true, 00:17:15.406 "nvme_admin": false, 00:17:15.406 "nvme_io": false 00:17:15.406 }, 00:17:15.406 "memory_domains": [ 00:17:15.406 { 00:17:15.406 "dma_device_id": "system", 00:17:15.406 "dma_device_type": 1 00:17:15.406 }, 00:17:15.406 { 00:17:15.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.406 "dma_device_type": 2 00:17:15.406 } 00:17:15.406 ], 00:17:15.406 "driver_specific": {} 00:17:15.406 }' 00:17:15.406 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:15.691 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:15.691 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:15.691 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:15.691 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:15.691 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:15.691 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:15.691 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:15.691 03:11:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.691 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:15.961 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:15.961 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:15.961 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:15.961 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:15.961 03:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:16.219 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:16.219 "name": "BaseBdev2", 00:17:16.219 "aliases": [ 00:17:16.219 "096ab0ea-bfc4-498f-847a-1ee2de5bff02" 00:17:16.219 ], 00:17:16.219 "product_name": "Malloc disk", 00:17:16.219 "block_size": 512, 00:17:16.219 "num_blocks": 65536, 00:17:16.219 "uuid": "096ab0ea-bfc4-498f-847a-1ee2de5bff02", 00:17:16.219 "assigned_rate_limits": { 00:17:16.219 "rw_ios_per_sec": 0, 00:17:16.219 "rw_mbytes_per_sec": 0, 00:17:16.220 "r_mbytes_per_sec": 0, 00:17:16.220 "w_mbytes_per_sec": 0 00:17:16.220 }, 00:17:16.220 "claimed": true, 00:17:16.220 "claim_type": "exclusive_write", 00:17:16.220 "zoned": false, 00:17:16.220 "supported_io_types": { 00:17:16.220 "read": true, 00:17:16.220 "write": true, 00:17:16.220 "unmap": true, 00:17:16.220 "write_zeroes": true, 00:17:16.220 "flush": true, 00:17:16.220 "reset": true, 00:17:16.220 "compare": false, 00:17:16.220 "compare_and_write": false, 00:17:16.220 "abort": true, 00:17:16.220 "nvme_admin": false, 00:17:16.220 "nvme_io": false 00:17:16.220 }, 00:17:16.220 "memory_domains": [ 00:17:16.220 { 00:17:16.220 "dma_device_id": "system", 00:17:16.220 "dma_device_type": 1 00:17:16.220 }, 00:17:16.220 { 00:17:16.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.220 "dma_device_type": 2 00:17:16.220 } 00:17:16.220 ], 00:17:16.220 "driver_specific": {} 00:17:16.220 }' 00:17:16.220 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:16.220 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:16.220 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:16.220 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:16.220 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:16.220 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:16.220 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:16.478 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:16.478 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:16.478 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:16.478 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:16.478 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:16.478 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for 
name in $base_bdev_names 00:17:16.478 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:16.478 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:16.736 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:16.736 "name": "BaseBdev3", 00:17:16.736 "aliases": [ 00:17:16.736 "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf" 00:17:16.736 ], 00:17:16.736 "product_name": "Malloc disk", 00:17:16.736 "block_size": 512, 00:17:16.736 "num_blocks": 65536, 00:17:16.736 "uuid": "4cc4a3cd-73d9-45fe-91b0-aa704b7d87cf", 00:17:16.736 "assigned_rate_limits": { 00:17:16.736 "rw_ios_per_sec": 0, 00:17:16.736 "rw_mbytes_per_sec": 0, 00:17:16.736 "r_mbytes_per_sec": 0, 00:17:16.736 "w_mbytes_per_sec": 0 00:17:16.736 }, 00:17:16.736 "claimed": true, 00:17:16.736 "claim_type": "exclusive_write", 00:17:16.736 "zoned": false, 00:17:16.736 "supported_io_types": { 00:17:16.736 "read": true, 00:17:16.736 "write": true, 00:17:16.736 "unmap": true, 00:17:16.736 "write_zeroes": true, 00:17:16.736 "flush": true, 00:17:16.736 "reset": true, 00:17:16.736 "compare": false, 00:17:16.736 "compare_and_write": false, 00:17:16.736 "abort": true, 00:17:16.736 "nvme_admin": false, 00:17:16.736 "nvme_io": false 00:17:16.736 }, 00:17:16.736 "memory_domains": [ 00:17:16.736 { 00:17:16.736 "dma_device_id": "system", 00:17:16.736 "dma_device_type": 1 00:17:16.736 }, 00:17:16.736 { 00:17:16.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.736 "dma_device_type": 2 00:17:16.736 } 00:17:16.736 ], 00:17:16.736 "driver_specific": {} 00:17:16.736 }' 00:17:16.736 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:16.736 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:16.736 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:16.736 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:16.994 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:16.994 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:16.994 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:16.994 03:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:16.994 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:16.994 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:16.994 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:16.994 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:16.994 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:16.994 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:16.994 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:17.253 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:17.253 "name": "BaseBdev4", 00:17:17.253 
"aliases": [ 00:17:17.253 "2690244e-d795-4606-924e-27730f3bdb8b" 00:17:17.253 ], 00:17:17.253 "product_name": "Malloc disk", 00:17:17.253 "block_size": 512, 00:17:17.253 "num_blocks": 65536, 00:17:17.253 "uuid": "2690244e-d795-4606-924e-27730f3bdb8b", 00:17:17.253 "assigned_rate_limits": { 00:17:17.253 "rw_ios_per_sec": 0, 00:17:17.253 "rw_mbytes_per_sec": 0, 00:17:17.253 "r_mbytes_per_sec": 0, 00:17:17.253 "w_mbytes_per_sec": 0 00:17:17.253 }, 00:17:17.253 "claimed": true, 00:17:17.253 "claim_type": "exclusive_write", 00:17:17.253 "zoned": false, 00:17:17.253 "supported_io_types": { 00:17:17.253 "read": true, 00:17:17.253 "write": true, 00:17:17.253 "unmap": true, 00:17:17.253 "write_zeroes": true, 00:17:17.253 "flush": true, 00:17:17.253 "reset": true, 00:17:17.253 "compare": false, 00:17:17.253 "compare_and_write": false, 00:17:17.253 "abort": true, 00:17:17.253 "nvme_admin": false, 00:17:17.253 "nvme_io": false 00:17:17.253 }, 00:17:17.253 "memory_domains": [ 00:17:17.253 { 00:17:17.253 "dma_device_id": "system", 00:17:17.253 "dma_device_type": 1 00:17:17.253 }, 00:17:17.253 { 00:17:17.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.253 "dma_device_type": 2 00:17:17.253 } 00:17:17.253 ], 00:17:17.253 "driver_specific": {} 00:17:17.253 }' 00:17:17.253 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:17.511 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:17.511 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:17.511 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:17.511 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:17.511 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:17.511 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:17.511 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:17.511 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:17.511 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:17.769 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:17.769 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:17.769 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:18.027 [2024-05-15 03:11:48.978210] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.027 [2024-05-15 03:11:48.978235] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.027 [2024-05-15 03:11:48.978281] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.027 [2024-05-15 03:11:48.978341] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.027 [2024-05-15 03:11:48.978351] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15773e0 name Existed_Raid, state offline 00:17:18.027 03:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 4118052 00:17:18.027 03:11:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@946 -- # '[' -z 4118052 ']'
00:17:18.027 03:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 4118052
00:17:18.027 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname
00:17:18.027 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:18.027 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4118052
00:17:18.027 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:17:18.027 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:17:18.027 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4118052'
00:17:18.027 killing process with pid 4118052
00:17:18.027 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 4118052
00:17:18.027 [2024-05-15 03:11:49.042389] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:18.027 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 4118052
00:17:18.027 [2024-05-15 03:11:49.075360] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0
00:17:18.286
00:17:18.286 real 0m33.372s
00:17:18.286 user 1m2.588s
00:17:18.286 sys 0m4.616s
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:18.286 ************************************
00:17:18.286 END TEST raid_state_function_test
00:17:18.286 ************************************
00:17:18.286 03:11:49 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true
00:17:18.286 03:11:49 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']'
00:17:18.286 03:11:49 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:17:18.286 03:11:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:18.286 ************************************
00:17:18.286 START TEST raid_state_function_test_sb
00:17:18.286 ************************************
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 true
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 ))
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs ))
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ ))
00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <=
num_base_bdevs )) 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=4124169 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4124169' 00:17:18.286 Process raid pid: 4124169 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 4124169 /var/tmp/spdk-raid.sock 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 4124169 ']' 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
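# (A minimal sketch, not part of the captured output: the _sb variant drives the
# same state machine against an SPDK target started as below; the functional
# difference is the -s/--superblock flag on bdev_raid_create. Note the create
# call is issued before any base bdev exists, so the array must come up in the
# "configuring" state with zero discovered members. $rpc is the shorthand
# assumed above; polling spdk_get_version to detect readiness is an assumption,
# not what the harness itself does:)
#
#   /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
#       -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
#   until $rpc spdk_get_version &> /dev/null; do sleep 0.5; done
#   $rpc bdev_raid_create -z 64 -s -r concat \
#       -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid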
00:17:18.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:18.286 03:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.286 [2024-05-15 03:11:49.426274] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:17:18.287 [2024-05-15 03:11:49.426313] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.545 [2024-05-15 03:11:49.514086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.545 [2024-05-15 03:11:49.604067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.545 [2024-05-15 03:11:49.658238] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.545 [2024-05-15 03:11:49.658267] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.110 03:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:19.110 03:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:17:19.110 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:19.367 [2024-05-15 03:11:50.452550] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.367 [2024-05-15 03:11:50.452593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.368 [2024-05-15 03:11:50.452603] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.368 [2024-05-15 03:11:50.452611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.368 [2024-05-15 03:11:50.452619] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.368 [2024-05-15 03:11:50.452627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.368 [2024-05-15 03:11:50.452634] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:19.368 [2024-05-15 03:11:50.452643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:19.368 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:19.368 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.368 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:19.368 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:19.368 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:19.368 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:19.368 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.368 
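The bdev_raid_create call above was issued before any of the four base bdevs existed, so the raid bdev is registered but held in the "configuring" state. The same call can be reproduced by hand against the test socket:

    # -z 64: 64 KiB strip size; -s: write a superblock to each base bdev
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # With zero base bdevs discovered, the dump that follows reports
    # "state": "configuring" and "num_base_bdevs_discovered": 0:
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all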
03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.368 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.368 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.368 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.368 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.626 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.626 "name": "Existed_Raid", 00:17:19.626 "uuid": "0811593f-8f50-4b28-8c72-dbfb1ca2296c", 00:17:19.626 "strip_size_kb": 64, 00:17:19.626 "state": "configuring", 00:17:19.626 "raid_level": "concat", 00:17:19.626 "superblock": true, 00:17:19.626 "num_base_bdevs": 4, 00:17:19.626 "num_base_bdevs_discovered": 0, 00:17:19.626 "num_base_bdevs_operational": 4, 00:17:19.626 "base_bdevs_list": [ 00:17:19.626 { 00:17:19.626 "name": "BaseBdev1", 00:17:19.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.626 "is_configured": false, 00:17:19.626 "data_offset": 0, 00:17:19.626 "data_size": 0 00:17:19.626 }, 00:17:19.626 { 00:17:19.626 "name": "BaseBdev2", 00:17:19.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.626 "is_configured": false, 00:17:19.626 "data_offset": 0, 00:17:19.626 "data_size": 0 00:17:19.626 }, 00:17:19.626 { 00:17:19.626 "name": "BaseBdev3", 00:17:19.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.626 "is_configured": false, 00:17:19.626 "data_offset": 0, 00:17:19.626 "data_size": 0 00:17:19.626 }, 00:17:19.626 { 00:17:19.626 "name": "BaseBdev4", 00:17:19.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.626 "is_configured": false, 00:17:19.626 "data_offset": 0, 00:17:19.626 "data_size": 0 00:17:19.626 } 00:17:19.626 ] 00:17:19.626 }' 00:17:19.626 03:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.626 03:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.192 03:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:20.449 [2024-05-15 03:11:51.571368] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:20.449 [2024-05-15 03:11:51.571398] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2387e00 name Existed_Raid, state configuring 00:17:20.449 03:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:20.706 [2024-05-15 03:11:51.828087] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:20.706 [2024-05-15 03:11:51.828113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:20.706 [2024-05-15 03:11:51.828121] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.706 [2024-05-15 03:11:51.828130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:17:20.706 [2024-05-15 03:11:51.828137] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:20.706 [2024-05-15 03:11:51.828146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:20.706 [2024-05-15 03:11:51.828153] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:20.706 [2024-05-15 03:11:51.828162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:20.706 03:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:20.963 [2024-05-15 03:11:52.018145] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.963 BaseBdev1 00:17:20.964 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:17:20.964 03:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:20.964 03:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:20.964 03:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:20.964 03:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:20.964 03:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:20.964 03:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:21.221 03:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:21.479 [ 00:17:21.479 { 00:17:21.479 "name": "BaseBdev1", 00:17:21.479 "aliases": [ 00:17:21.479 "8a595c41-b61e-4a52-9559-047a7fce0800" 00:17:21.479 ], 00:17:21.479 "product_name": "Malloc disk", 00:17:21.479 "block_size": 512, 00:17:21.479 "num_blocks": 65536, 00:17:21.479 "uuid": "8a595c41-b61e-4a52-9559-047a7fce0800", 00:17:21.479 "assigned_rate_limits": { 00:17:21.479 "rw_ios_per_sec": 0, 00:17:21.479 "rw_mbytes_per_sec": 0, 00:17:21.479 "r_mbytes_per_sec": 0, 00:17:21.479 "w_mbytes_per_sec": 0 00:17:21.479 }, 00:17:21.479 "claimed": true, 00:17:21.479 "claim_type": "exclusive_write", 00:17:21.479 "zoned": false, 00:17:21.479 "supported_io_types": { 00:17:21.479 "read": true, 00:17:21.479 "write": true, 00:17:21.479 "unmap": true, 00:17:21.479 "write_zeroes": true, 00:17:21.479 "flush": true, 00:17:21.479 "reset": true, 00:17:21.479 "compare": false, 00:17:21.479 "compare_and_write": false, 00:17:21.479 "abort": true, 00:17:21.479 "nvme_admin": false, 00:17:21.479 "nvme_io": false 00:17:21.479 }, 00:17:21.479 "memory_domains": [ 00:17:21.479 { 00:17:21.479 "dma_device_id": "system", 00:17:21.479 "dma_device_type": 1 00:17:21.479 }, 00:17:21.479 { 00:17:21.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.479 "dma_device_type": 2 00:17:21.479 } 00:17:21.479 ], 00:17:21.479 "driver_specific": {} 00:17:21.479 } 00:17:21.479 ] 00:17:21.479 03:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:21.479 03:11:52 
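BaseBdev1 is an ordinary 32 MiB malloc disk with 512-byte blocks (hence "num_blocks": 65536 in the dump above), and waitforbdev amounts to flushing examine callbacks and then polling for the name with a 2000 ms timeout. The three RPCs involved, exactly as the trace shows:

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000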
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:21.479 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:21.479 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:21.479 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:21.479 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:21.479 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:21.479 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.479 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.479 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.479 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.479 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.479 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.737 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.737 "name": "Existed_Raid", 00:17:21.737 "uuid": "72ae3bbf-3831-4e7a-8bca-2f77b1c0017d", 00:17:21.737 "strip_size_kb": 64, 00:17:21.737 "state": "configuring", 00:17:21.737 "raid_level": "concat", 00:17:21.737 "superblock": true, 00:17:21.737 "num_base_bdevs": 4, 00:17:21.737 "num_base_bdevs_discovered": 1, 00:17:21.737 "num_base_bdevs_operational": 4, 00:17:21.737 "base_bdevs_list": [ 00:17:21.737 { 00:17:21.737 "name": "BaseBdev1", 00:17:21.737 "uuid": "8a595c41-b61e-4a52-9559-047a7fce0800", 00:17:21.737 "is_configured": true, 00:17:21.737 "data_offset": 2048, 00:17:21.737 "data_size": 63488 00:17:21.737 }, 00:17:21.737 { 00:17:21.737 "name": "BaseBdev2", 00:17:21.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.737 "is_configured": false, 00:17:21.737 "data_offset": 0, 00:17:21.737 "data_size": 0 00:17:21.737 }, 00:17:21.737 { 00:17:21.737 "name": "BaseBdev3", 00:17:21.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.737 "is_configured": false, 00:17:21.737 "data_offset": 0, 00:17:21.737 "data_size": 0 00:17:21.737 }, 00:17:21.737 { 00:17:21.737 "name": "BaseBdev4", 00:17:21.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.737 "is_configured": false, 00:17:21.737 "data_offset": 0, 00:17:21.737 "data_size": 0 00:17:21.737 } 00:17:21.737 ] 00:17:21.737 }' 00:17:21.737 03:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.737 03:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.303 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:22.560 [2024-05-15 03:11:53.578335] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:22.560 [2024-05-15 03:11:53.578376] bdev_raid.c: 350:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x23880a0 name Existed_Raid, state configuring 00:17:22.560 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:22.818 [2024-05-15 03:11:53.835070] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.818 [2024-05-15 03:11:53.836590] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.818 [2024-05-15 03:11:53.836621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.818 [2024-05-15 03:11:53.836629] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:22.818 [2024-05-15 03:11:53.836638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:22.819 [2024-05-15 03:11:53.836645] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:22.819 [2024-05-15 03:11:53.836654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.819 03:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.077 03:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.077 "name": "Existed_Raid", 00:17:23.077 "uuid": "49ea8bf5-260e-40fb-b047-5c5b4bb985c4", 00:17:23.077 "strip_size_kb": 64, 00:17:23.077 "state": "configuring", 00:17:23.077 "raid_level": "concat", 00:17:23.077 "superblock": true, 00:17:23.077 "num_base_bdevs": 4, 00:17:23.077 "num_base_bdevs_discovered": 1, 00:17:23.077 "num_base_bdevs_operational": 4, 00:17:23.077 "base_bdevs_list": [ 00:17:23.077 { 00:17:23.077 "name": 
"BaseBdev1", 00:17:23.077 "uuid": "8a595c41-b61e-4a52-9559-047a7fce0800", 00:17:23.077 "is_configured": true, 00:17:23.077 "data_offset": 2048, 00:17:23.077 "data_size": 63488 00:17:23.077 }, 00:17:23.077 { 00:17:23.077 "name": "BaseBdev2", 00:17:23.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.077 "is_configured": false, 00:17:23.077 "data_offset": 0, 00:17:23.077 "data_size": 0 00:17:23.077 }, 00:17:23.077 { 00:17:23.077 "name": "BaseBdev3", 00:17:23.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.077 "is_configured": false, 00:17:23.077 "data_offset": 0, 00:17:23.077 "data_size": 0 00:17:23.077 }, 00:17:23.077 { 00:17:23.077 "name": "BaseBdev4", 00:17:23.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.077 "is_configured": false, 00:17:23.077 "data_offset": 0, 00:17:23.077 "data_size": 0 00:17:23.077 } 00:17:23.077 ] 00:17:23.077 }' 00:17:23.077 03:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.077 03:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.643 03:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:23.901 [2024-05-15 03:11:54.965258] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.901 BaseBdev2 00:17:23.901 03:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:17:23.901 03:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:23.901 03:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:23.901 03:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:23.901 03:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:23.901 03:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:23.901 03:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.159 03:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:24.418 [ 00:17:24.418 { 00:17:24.418 "name": "BaseBdev2", 00:17:24.418 "aliases": [ 00:17:24.418 "203036b5-b2dd-4045-aaa1-bedec90f01f9" 00:17:24.418 ], 00:17:24.418 "product_name": "Malloc disk", 00:17:24.418 "block_size": 512, 00:17:24.418 "num_blocks": 65536, 00:17:24.418 "uuid": "203036b5-b2dd-4045-aaa1-bedec90f01f9", 00:17:24.418 "assigned_rate_limits": { 00:17:24.418 "rw_ios_per_sec": 0, 00:17:24.418 "rw_mbytes_per_sec": 0, 00:17:24.418 "r_mbytes_per_sec": 0, 00:17:24.418 "w_mbytes_per_sec": 0 00:17:24.418 }, 00:17:24.418 "claimed": true, 00:17:24.418 "claim_type": "exclusive_write", 00:17:24.418 "zoned": false, 00:17:24.418 "supported_io_types": { 00:17:24.418 "read": true, 00:17:24.418 "write": true, 00:17:24.418 "unmap": true, 00:17:24.418 "write_zeroes": true, 00:17:24.418 "flush": true, 00:17:24.418 "reset": true, 00:17:24.418 "compare": false, 00:17:24.418 "compare_and_write": false, 00:17:24.418 "abort": true, 00:17:24.418 "nvme_admin": 
false, 00:17:24.418 "nvme_io": false 00:17:24.418 }, 00:17:24.418 "memory_domains": [ 00:17:24.418 { 00:17:24.418 "dma_device_id": "system", 00:17:24.418 "dma_device_type": 1 00:17:24.418 }, 00:17:24.418 { 00:17:24.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.418 "dma_device_type": 2 00:17:24.418 } 00:17:24.418 ], 00:17:24.418 "driver_specific": {} 00:17:24.418 } 00:17:24.418 ] 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.418 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.677 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:24.677 "name": "Existed_Raid", 00:17:24.677 "uuid": "49ea8bf5-260e-40fb-b047-5c5b4bb985c4", 00:17:24.677 "strip_size_kb": 64, 00:17:24.677 "state": "configuring", 00:17:24.677 "raid_level": "concat", 00:17:24.677 "superblock": true, 00:17:24.677 "num_base_bdevs": 4, 00:17:24.677 "num_base_bdevs_discovered": 2, 00:17:24.677 "num_base_bdevs_operational": 4, 00:17:24.677 "base_bdevs_list": [ 00:17:24.677 { 00:17:24.677 "name": "BaseBdev1", 00:17:24.677 "uuid": "8a595c41-b61e-4a52-9559-047a7fce0800", 00:17:24.677 "is_configured": true, 00:17:24.677 "data_offset": 2048, 00:17:24.677 "data_size": 63488 00:17:24.677 }, 00:17:24.677 { 00:17:24.677 "name": "BaseBdev2", 00:17:24.677 "uuid": "203036b5-b2dd-4045-aaa1-bedec90f01f9", 00:17:24.677 "is_configured": true, 00:17:24.677 "data_offset": 2048, 00:17:24.677 "data_size": 63488 00:17:24.677 }, 00:17:24.677 { 00:17:24.677 "name": "BaseBdev3", 00:17:24.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.677 "is_configured": false, 00:17:24.677 "data_offset": 0, 00:17:24.677 "data_size": 0 00:17:24.677 }, 00:17:24.677 { 00:17:24.677 "name": "BaseBdev4", 00:17:24.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.677 
"is_configured": false, 00:17:24.677 "data_offset": 0, 00:17:24.677 "data_size": 0 00:17:24.677 } 00:17:24.677 ] 00:17:24.677 }' 00:17:24.677 03:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.677 03:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.239 03:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:25.495 [2024-05-15 03:11:56.580870] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:25.495 BaseBdev3 00:17:25.495 03:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:17:25.495 03:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:25.495 03:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:25.495 03:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:25.495 03:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:25.495 03:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:25.495 03:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.752 03:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:26.009 [ 00:17:26.009 { 00:17:26.009 "name": "BaseBdev3", 00:17:26.009 "aliases": [ 00:17:26.009 "84fb0ffe-4088-4990-a48e-88428bb05169" 00:17:26.009 ], 00:17:26.009 "product_name": "Malloc disk", 00:17:26.009 "block_size": 512, 00:17:26.009 "num_blocks": 65536, 00:17:26.009 "uuid": "84fb0ffe-4088-4990-a48e-88428bb05169", 00:17:26.009 "assigned_rate_limits": { 00:17:26.009 "rw_ios_per_sec": 0, 00:17:26.009 "rw_mbytes_per_sec": 0, 00:17:26.009 "r_mbytes_per_sec": 0, 00:17:26.009 "w_mbytes_per_sec": 0 00:17:26.009 }, 00:17:26.009 "claimed": true, 00:17:26.009 "claim_type": "exclusive_write", 00:17:26.009 "zoned": false, 00:17:26.009 "supported_io_types": { 00:17:26.009 "read": true, 00:17:26.009 "write": true, 00:17:26.009 "unmap": true, 00:17:26.009 "write_zeroes": true, 00:17:26.009 "flush": true, 00:17:26.009 "reset": true, 00:17:26.009 "compare": false, 00:17:26.009 "compare_and_write": false, 00:17:26.009 "abort": true, 00:17:26.009 "nvme_admin": false, 00:17:26.009 "nvme_io": false 00:17:26.009 }, 00:17:26.009 "memory_domains": [ 00:17:26.009 { 00:17:26.009 "dma_device_id": "system", 00:17:26.009 "dma_device_type": 1 00:17:26.009 }, 00:17:26.009 { 00:17:26.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.009 "dma_device_type": 2 00:17:26.009 } 00:17:26.009 ], 00:17:26.009 "driver_specific": {} 00:17:26.009 } 00:17:26.009 ] 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:26.009 03:11:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.009 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.266 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.266 "name": "Existed_Raid", 00:17:26.266 "uuid": "49ea8bf5-260e-40fb-b047-5c5b4bb985c4", 00:17:26.266 "strip_size_kb": 64, 00:17:26.266 "state": "configuring", 00:17:26.266 "raid_level": "concat", 00:17:26.266 "superblock": true, 00:17:26.266 "num_base_bdevs": 4, 00:17:26.266 "num_base_bdevs_discovered": 3, 00:17:26.266 "num_base_bdevs_operational": 4, 00:17:26.266 "base_bdevs_list": [ 00:17:26.266 { 00:17:26.266 "name": "BaseBdev1", 00:17:26.266 "uuid": "8a595c41-b61e-4a52-9559-047a7fce0800", 00:17:26.266 "is_configured": true, 00:17:26.266 "data_offset": 2048, 00:17:26.266 "data_size": 63488 00:17:26.266 }, 00:17:26.266 { 00:17:26.266 "name": "BaseBdev2", 00:17:26.266 "uuid": "203036b5-b2dd-4045-aaa1-bedec90f01f9", 00:17:26.266 "is_configured": true, 00:17:26.266 "data_offset": 2048, 00:17:26.266 "data_size": 63488 00:17:26.266 }, 00:17:26.266 { 00:17:26.266 "name": "BaseBdev3", 00:17:26.266 "uuid": "84fb0ffe-4088-4990-a48e-88428bb05169", 00:17:26.266 "is_configured": true, 00:17:26.266 "data_offset": 2048, 00:17:26.266 "data_size": 63488 00:17:26.266 }, 00:17:26.266 { 00:17:26.266 "name": "BaseBdev4", 00:17:26.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.266 "is_configured": false, 00:17:26.266 "data_offset": 0, 00:17:26.266 "data_size": 0 00:17:26.266 } 00:17:26.266 ] 00:17:26.266 }' 00:17:26.266 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.266 03:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.197 03:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:27.197 [2024-05-15 03:11:58.236544] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:27.197 [2024-05-15 03:11:58.236717] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x2387670 00:17:27.197 [2024-05-15 03:11:58.236729] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:27.197 [2024-05-15 03:11:58.236933] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2389a30 00:17:27.197 [2024-05-15 03:11:58.237087] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2387670 00:17:27.197 [2024-05-15 03:11:58.237098] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2387670 00:17:27.197 [2024-05-15 03:11:58.237210] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.197 BaseBdev4 00:17:27.197 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:17:27.197 03:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:17:27.197 03:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:27.197 03:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:27.197 03:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:27.197 03:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:27.197 03:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.455 03:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:27.713 [ 00:17:27.713 { 00:17:27.713 "name": "BaseBdev4", 00:17:27.713 "aliases": [ 00:17:27.713 "7ec606dd-ecdc-427e-a016-cefc37ca4a14" 00:17:27.713 ], 00:17:27.713 "product_name": "Malloc disk", 00:17:27.713 "block_size": 512, 00:17:27.713 "num_blocks": 65536, 00:17:27.713 "uuid": "7ec606dd-ecdc-427e-a016-cefc37ca4a14", 00:17:27.713 "assigned_rate_limits": { 00:17:27.713 "rw_ios_per_sec": 0, 00:17:27.713 "rw_mbytes_per_sec": 0, 00:17:27.713 "r_mbytes_per_sec": 0, 00:17:27.713 "w_mbytes_per_sec": 0 00:17:27.713 }, 00:17:27.713 "claimed": true, 00:17:27.713 "claim_type": "exclusive_write", 00:17:27.713 "zoned": false, 00:17:27.713 "supported_io_types": { 00:17:27.713 "read": true, 00:17:27.713 "write": true, 00:17:27.713 "unmap": true, 00:17:27.713 "write_zeroes": true, 00:17:27.713 "flush": true, 00:17:27.713 "reset": true, 00:17:27.713 "compare": false, 00:17:27.713 "compare_and_write": false, 00:17:27.713 "abort": true, 00:17:27.713 "nvme_admin": false, 00:17:27.713 "nvme_io": false 00:17:27.713 }, 00:17:27.713 "memory_domains": [ 00:17:27.713 { 00:17:27.713 "dma_device_id": "system", 00:17:27.713 "dma_device_type": 1 00:17:27.713 }, 00:17:27.713 { 00:17:27.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.713 "dma_device_type": 2 00:17:27.713 } 00:17:27.713 ], 00:17:27.713 "driver_specific": {} 00:17:27.713 } 00:17:27.713 ] 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:27.713 
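With BaseBdev4 claimed, all four members are present and the raid transitions to online. The blockcnt 253952 in the io-device registration above is consistent with the earlier dumps: each 65536-block malloc bdev cedes 2048 blocks (1 MiB) to the superblock at data_offset, leaving 63488 data blocks, and concatenating four members gives 4 x 63488 = 253952 blocks of 512 B, i.e. 124 MiB of raid capacity.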
03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.713 03:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.970 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:27.970 "name": "Existed_Raid", 00:17:27.970 "uuid": "49ea8bf5-260e-40fb-b047-5c5b4bb985c4", 00:17:27.970 "strip_size_kb": 64, 00:17:27.970 "state": "online", 00:17:27.970 "raid_level": "concat", 00:17:27.970 "superblock": true, 00:17:27.970 "num_base_bdevs": 4, 00:17:27.970 "num_base_bdevs_discovered": 4, 00:17:27.970 "num_base_bdevs_operational": 4, 00:17:27.970 "base_bdevs_list": [ 00:17:27.970 { 00:17:27.970 "name": "BaseBdev1", 00:17:27.970 "uuid": "8a595c41-b61e-4a52-9559-047a7fce0800", 00:17:27.970 "is_configured": true, 00:17:27.970 "data_offset": 2048, 00:17:27.970 "data_size": 63488 00:17:27.970 }, 00:17:27.970 { 00:17:27.970 "name": "BaseBdev2", 00:17:27.970 "uuid": "203036b5-b2dd-4045-aaa1-bedec90f01f9", 00:17:27.970 "is_configured": true, 00:17:27.970 "data_offset": 2048, 00:17:27.970 "data_size": 63488 00:17:27.970 }, 00:17:27.970 { 00:17:27.970 "name": "BaseBdev3", 00:17:27.970 "uuid": "84fb0ffe-4088-4990-a48e-88428bb05169", 00:17:27.970 "is_configured": true, 00:17:27.970 "data_offset": 2048, 00:17:27.970 "data_size": 63488 00:17:27.970 }, 00:17:27.970 { 00:17:27.970 "name": "BaseBdev4", 00:17:27.970 "uuid": "7ec606dd-ecdc-427e-a016-cefc37ca4a14", 00:17:27.970 "is_configured": true, 00:17:27.970 "data_offset": 2048, 00:17:27.970 "data_size": 63488 00:17:27.970 } 00:17:27.970 ] 00:17:27.970 }' 00:17:27.970 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:27.970 03:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.533 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:17:28.533 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:17:28.533 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:28.533 03:11:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:28.533 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:28.533 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:17:28.533 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:28.533 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:28.789 [2024-05-15 03:11:59.885297] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.789 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:28.789 "name": "Existed_Raid", 00:17:28.789 "aliases": [ 00:17:28.789 "49ea8bf5-260e-40fb-b047-5c5b4bb985c4" 00:17:28.789 ], 00:17:28.789 "product_name": "Raid Volume", 00:17:28.789 "block_size": 512, 00:17:28.789 "num_blocks": 253952, 00:17:28.789 "uuid": "49ea8bf5-260e-40fb-b047-5c5b4bb985c4", 00:17:28.789 "assigned_rate_limits": { 00:17:28.789 "rw_ios_per_sec": 0, 00:17:28.789 "rw_mbytes_per_sec": 0, 00:17:28.789 "r_mbytes_per_sec": 0, 00:17:28.789 "w_mbytes_per_sec": 0 00:17:28.789 }, 00:17:28.789 "claimed": false, 00:17:28.789 "zoned": false, 00:17:28.789 "supported_io_types": { 00:17:28.789 "read": true, 00:17:28.789 "write": true, 00:17:28.789 "unmap": true, 00:17:28.789 "write_zeroes": true, 00:17:28.789 "flush": true, 00:17:28.789 "reset": true, 00:17:28.789 "compare": false, 00:17:28.789 "compare_and_write": false, 00:17:28.789 "abort": false, 00:17:28.789 "nvme_admin": false, 00:17:28.789 "nvme_io": false 00:17:28.789 }, 00:17:28.789 "memory_domains": [ 00:17:28.789 { 00:17:28.789 "dma_device_id": "system", 00:17:28.789 "dma_device_type": 1 00:17:28.789 }, 00:17:28.789 { 00:17:28.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.789 "dma_device_type": 2 00:17:28.789 }, 00:17:28.789 { 00:17:28.789 "dma_device_id": "system", 00:17:28.789 "dma_device_type": 1 00:17:28.789 }, 00:17:28.789 { 00:17:28.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.789 "dma_device_type": 2 00:17:28.789 }, 00:17:28.789 { 00:17:28.789 "dma_device_id": "system", 00:17:28.789 "dma_device_type": 1 00:17:28.789 }, 00:17:28.789 { 00:17:28.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.789 "dma_device_type": 2 00:17:28.789 }, 00:17:28.789 { 00:17:28.789 "dma_device_id": "system", 00:17:28.789 "dma_device_type": 1 00:17:28.789 }, 00:17:28.789 { 00:17:28.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.789 "dma_device_type": 2 00:17:28.789 } 00:17:28.789 ], 00:17:28.789 "driver_specific": { 00:17:28.789 "raid": { 00:17:28.789 "uuid": "49ea8bf5-260e-40fb-b047-5c5b4bb985c4", 00:17:28.789 "strip_size_kb": 64, 00:17:28.789 "state": "online", 00:17:28.789 "raid_level": "concat", 00:17:28.789 "superblock": true, 00:17:28.789 "num_base_bdevs": 4, 00:17:28.789 "num_base_bdevs_discovered": 4, 00:17:28.789 "num_base_bdevs_operational": 4, 00:17:28.789 "base_bdevs_list": [ 00:17:28.789 { 00:17:28.789 "name": "BaseBdev1", 00:17:28.789 "uuid": "8a595c41-b61e-4a52-9559-047a7fce0800", 00:17:28.789 "is_configured": true, 00:17:28.789 "data_offset": 2048, 00:17:28.789 "data_size": 63488 00:17:28.789 }, 00:17:28.789 { 00:17:28.789 "name": "BaseBdev2", 00:17:28.789 "uuid": "203036b5-b2dd-4045-aaa1-bedec90f01f9", 00:17:28.789 "is_configured": true, 00:17:28.789 
"data_offset": 2048, 00:17:28.789 "data_size": 63488 00:17:28.789 }, 00:17:28.789 { 00:17:28.789 "name": "BaseBdev3", 00:17:28.789 "uuid": "84fb0ffe-4088-4990-a48e-88428bb05169", 00:17:28.789 "is_configured": true, 00:17:28.789 "data_offset": 2048, 00:17:28.789 "data_size": 63488 00:17:28.789 }, 00:17:28.789 { 00:17:28.789 "name": "BaseBdev4", 00:17:28.789 "uuid": "7ec606dd-ecdc-427e-a016-cefc37ca4a14", 00:17:28.789 "is_configured": true, 00:17:28.789 "data_offset": 2048, 00:17:28.789 "data_size": 63488 00:17:28.789 } 00:17:28.789 ] 00:17:28.789 } 00:17:28.789 } 00:17:28.789 }' 00:17:28.789 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.047 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:17:29.047 BaseBdev2 00:17:29.047 BaseBdev3 00:17:29.047 BaseBdev4' 00:17:29.047 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:29.047 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:29.047 03:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:29.305 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:29.305 "name": "BaseBdev1", 00:17:29.305 "aliases": [ 00:17:29.305 "8a595c41-b61e-4a52-9559-047a7fce0800" 00:17:29.305 ], 00:17:29.305 "product_name": "Malloc disk", 00:17:29.305 "block_size": 512, 00:17:29.305 "num_blocks": 65536, 00:17:29.305 "uuid": "8a595c41-b61e-4a52-9559-047a7fce0800", 00:17:29.305 "assigned_rate_limits": { 00:17:29.305 "rw_ios_per_sec": 0, 00:17:29.305 "rw_mbytes_per_sec": 0, 00:17:29.305 "r_mbytes_per_sec": 0, 00:17:29.305 "w_mbytes_per_sec": 0 00:17:29.305 }, 00:17:29.305 "claimed": true, 00:17:29.305 "claim_type": "exclusive_write", 00:17:29.305 "zoned": false, 00:17:29.305 "supported_io_types": { 00:17:29.305 "read": true, 00:17:29.305 "write": true, 00:17:29.305 "unmap": true, 00:17:29.305 "write_zeroes": true, 00:17:29.305 "flush": true, 00:17:29.305 "reset": true, 00:17:29.305 "compare": false, 00:17:29.305 "compare_and_write": false, 00:17:29.305 "abort": true, 00:17:29.305 "nvme_admin": false, 00:17:29.305 "nvme_io": false 00:17:29.305 }, 00:17:29.305 "memory_domains": [ 00:17:29.305 { 00:17:29.305 "dma_device_id": "system", 00:17:29.305 "dma_device_type": 1 00:17:29.305 }, 00:17:29.305 { 00:17:29.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.305 "dma_device_type": 2 00:17:29.305 } 00:17:29.305 ], 00:17:29.305 "driver_specific": {} 00:17:29.305 }' 00:17:29.305 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:29.305 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:29.305 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:29.305 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:29.305 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:29.305 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:29.305 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:29.566 
03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:29.566 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:29.566 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:29.566 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:29.566 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:29.566 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:29.566 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:29.566 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:29.858 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:29.858 "name": "BaseBdev2", 00:17:29.858 "aliases": [ 00:17:29.858 "203036b5-b2dd-4045-aaa1-bedec90f01f9" 00:17:29.858 ], 00:17:29.858 "product_name": "Malloc disk", 00:17:29.858 "block_size": 512, 00:17:29.858 "num_blocks": 65536, 00:17:29.858 "uuid": "203036b5-b2dd-4045-aaa1-bedec90f01f9", 00:17:29.858 "assigned_rate_limits": { 00:17:29.858 "rw_ios_per_sec": 0, 00:17:29.858 "rw_mbytes_per_sec": 0, 00:17:29.858 "r_mbytes_per_sec": 0, 00:17:29.858 "w_mbytes_per_sec": 0 00:17:29.858 }, 00:17:29.858 "claimed": true, 00:17:29.858 "claim_type": "exclusive_write", 00:17:29.858 "zoned": false, 00:17:29.858 "supported_io_types": { 00:17:29.859 "read": true, 00:17:29.859 "write": true, 00:17:29.859 "unmap": true, 00:17:29.859 "write_zeroes": true, 00:17:29.859 "flush": true, 00:17:29.859 "reset": true, 00:17:29.859 "compare": false, 00:17:29.859 "compare_and_write": false, 00:17:29.859 "abort": true, 00:17:29.859 "nvme_admin": false, 00:17:29.859 "nvme_io": false 00:17:29.859 }, 00:17:29.859 "memory_domains": [ 00:17:29.859 { 00:17:29.859 "dma_device_id": "system", 00:17:29.859 "dma_device_type": 1 00:17:29.859 }, 00:17:29.859 { 00:17:29.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.859 "dma_device_type": 2 00:17:29.859 } 00:17:29.859 ], 00:17:29.859 "driver_specific": {} 00:17:29.859 }' 00:17:29.859 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:29.859 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:29.859 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:29.859 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:30.121 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:30.121 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:30.121 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:30.121 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:30.121 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:30.121 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:30.121 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:30.121 03:12:01 
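verify_raid_bdev_properties first extracts the configured member names from the raid's own dump, then checks that each member agrees with the raid on block_size, md_size, md_interleave and dif_type. In outline (reconstructed from the xtrace above, so details of the real bdev_raid.sh may differ):

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
    names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                   | select(.is_configured == true).name' <<< "$raid_info")
    for name in $names; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        # These comparisons show up in the trace as [[ 512 == 512 ]]
        # and [[ null == null ]]:
        [[ $(jq .block_size <<< "$info") == $(jq .block_size <<< "$raid_info") ]]
        [[ $(jq .md_size    <<< "$info") == $(jq .md_size    <<< "$raid_info") ]]
    done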
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:30.121 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:30.121 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:30.121 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:30.378 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:30.378 "name": "BaseBdev3", 00:17:30.378 "aliases": [ 00:17:30.378 "84fb0ffe-4088-4990-a48e-88428bb05169" 00:17:30.378 ], 00:17:30.378 "product_name": "Malloc disk", 00:17:30.378 "block_size": 512, 00:17:30.378 "num_blocks": 65536, 00:17:30.378 "uuid": "84fb0ffe-4088-4990-a48e-88428bb05169", 00:17:30.378 "assigned_rate_limits": { 00:17:30.378 "rw_ios_per_sec": 0, 00:17:30.378 "rw_mbytes_per_sec": 0, 00:17:30.378 "r_mbytes_per_sec": 0, 00:17:30.378 "w_mbytes_per_sec": 0 00:17:30.378 }, 00:17:30.378 "claimed": true, 00:17:30.378 "claim_type": "exclusive_write", 00:17:30.378 "zoned": false, 00:17:30.378 "supported_io_types": { 00:17:30.378 "read": true, 00:17:30.378 "write": true, 00:17:30.378 "unmap": true, 00:17:30.378 "write_zeroes": true, 00:17:30.378 "flush": true, 00:17:30.378 "reset": true, 00:17:30.378 "compare": false, 00:17:30.378 "compare_and_write": false, 00:17:30.378 "abort": true, 00:17:30.378 "nvme_admin": false, 00:17:30.378 "nvme_io": false 00:17:30.378 }, 00:17:30.378 "memory_domains": [ 00:17:30.378 { 00:17:30.378 "dma_device_id": "system", 00:17:30.378 "dma_device_type": 1 00:17:30.378 }, 00:17:30.378 { 00:17:30.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.378 "dma_device_type": 2 00:17:30.378 } 00:17:30.378 ], 00:17:30.378 "driver_specific": {} 00:17:30.378 }' 00:17:30.378 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:30.635 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:30.635 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:30.635 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:30.635 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:30.635 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:30.635 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:30.635 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:30.635 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:30.892 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:30.892 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:30.892 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:30.892 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:30.892 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:30.892 
03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:31.151 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:31.151 "name": "BaseBdev4", 00:17:31.151 "aliases": [ 00:17:31.151 "7ec606dd-ecdc-427e-a016-cefc37ca4a14" 00:17:31.151 ], 00:17:31.151 "product_name": "Malloc disk", 00:17:31.151 "block_size": 512, 00:17:31.151 "num_blocks": 65536, 00:17:31.151 "uuid": "7ec606dd-ecdc-427e-a016-cefc37ca4a14", 00:17:31.151 "assigned_rate_limits": { 00:17:31.151 "rw_ios_per_sec": 0, 00:17:31.151 "rw_mbytes_per_sec": 0, 00:17:31.151 "r_mbytes_per_sec": 0, 00:17:31.151 "w_mbytes_per_sec": 0 00:17:31.151 }, 00:17:31.151 "claimed": true, 00:17:31.151 "claim_type": "exclusive_write", 00:17:31.151 "zoned": false, 00:17:31.151 "supported_io_types": { 00:17:31.151 "read": true, 00:17:31.151 "write": true, 00:17:31.151 "unmap": true, 00:17:31.151 "write_zeroes": true, 00:17:31.151 "flush": true, 00:17:31.151 "reset": true, 00:17:31.151 "compare": false, 00:17:31.151 "compare_and_write": false, 00:17:31.151 "abort": true, 00:17:31.151 "nvme_admin": false, 00:17:31.151 "nvme_io": false 00:17:31.151 }, 00:17:31.151 "memory_domains": [ 00:17:31.151 { 00:17:31.151 "dma_device_id": "system", 00:17:31.151 "dma_device_type": 1 00:17:31.151 }, 00:17:31.151 { 00:17:31.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.151 "dma_device_type": 2 00:17:31.151 } 00:17:31.151 ], 00:17:31.151 "driver_specific": {} 00:17:31.151 }' 00:17:31.151 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:31.151 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:31.151 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:31.151 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:31.151 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:31.409 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.409 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:31.409 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:31.409 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:31.409 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:31.409 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:31.409 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:31.409 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:31.668 [2024-05-15 03:12:02.740661] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.668 [2024-05-15 03:12:02.740690] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.668 [2024-05-15 03:12:02.740740] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # 
has_redundancy concat 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.668 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.928 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:31.928 "name": "Existed_Raid", 00:17:31.928 "uuid": "49ea8bf5-260e-40fb-b047-5c5b4bb985c4", 00:17:31.928 "strip_size_kb": 64, 00:17:31.928 "state": "offline", 00:17:31.928 "raid_level": "concat", 00:17:31.928 "superblock": true, 00:17:31.928 "num_base_bdevs": 4, 00:17:31.928 "num_base_bdevs_discovered": 3, 00:17:31.928 "num_base_bdevs_operational": 3, 00:17:31.928 "base_bdevs_list": [ 00:17:31.928 { 00:17:31.928 "name": null, 00:17:31.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.928 "is_configured": false, 00:17:31.928 "data_offset": 2048, 00:17:31.928 "data_size": 63488 00:17:31.928 }, 00:17:31.928 { 00:17:31.928 "name": "BaseBdev2", 00:17:31.928 "uuid": "203036b5-b2dd-4045-aaa1-bedec90f01f9", 00:17:31.928 "is_configured": true, 00:17:31.928 "data_offset": 2048, 00:17:31.928 "data_size": 63488 00:17:31.928 }, 00:17:31.928 { 00:17:31.928 "name": "BaseBdev3", 00:17:31.928 "uuid": "84fb0ffe-4088-4990-a48e-88428bb05169", 00:17:31.928 "is_configured": true, 00:17:31.928 "data_offset": 2048, 00:17:31.928 "data_size": 63488 00:17:31.928 }, 00:17:31.928 { 00:17:31.929 "name": "BaseBdev4", 00:17:31.929 "uuid": "7ec606dd-ecdc-427e-a016-cefc37ca4a14", 00:17:31.929 "is_configured": true, 00:17:31.929 "data_offset": 2048, 00:17:31.929 "data_size": 63488 00:17:31.929 } 00:17:31.929 ] 00:17:31.929 }' 00:17:31.929 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:31.929 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.861 03:12:03 
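Deleting BaseBdev1 out from under the online array drops it to offline, because concat carries no redundancy. The expected state comes from a small helper, roughly as follows (reconstructed from the case/return-1 lines in the trace; the exact list of redundant levels is an assumption):

    has_redundancy() {
        case $1 in
            raid1) return 0 ;;  # a mirrored level would stay online
            *)     return 1 ;;  # raid0/concat cannot survive a member loss
        esac
    }
    # concat -> return 1 -> expected_state=offline; the dump that follows
    # shows "state": "offline" with 3 of the 4 base bdevs still configured.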
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:32.861 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:32.861 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.861 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:32.861 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:32.861 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:32.861 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:33.119 [2024-05-15 03:12:04.173715] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.119 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:33.119 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:33.119 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.119 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:33.378 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:33.378 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.378 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:33.636 [2024-05-15 03:12:04.697546] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:33.636 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:33.636 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:33.636 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.636 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:33.894 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:33.894 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.894 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:34.153 [2024-05-15 03:12:05.225213] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:34.153 [2024-05-15 03:12:05.225250] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2387670 name Existed_Raid, state offline 00:17:34.153 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:34.153 03:12:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.153 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.153 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:17:34.411 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:17:34.411 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:17:34.411 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:17:34.411 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:17:34.411 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:34.411 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:34.668 BaseBdev2 00:17:34.668 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:17:34.668 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:34.668 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:34.668 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:34.668 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:34.668 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:34.668 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:34.926 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:35.184 [ 00:17:35.184 { 00:17:35.184 "name": "BaseBdev2", 00:17:35.184 "aliases": [ 00:17:35.184 "ed8efa7b-75d3-431d-9265-af80853d156d" 00:17:35.184 ], 00:17:35.184 "product_name": "Malloc disk", 00:17:35.184 "block_size": 512, 00:17:35.184 "num_blocks": 65536, 00:17:35.184 "uuid": "ed8efa7b-75d3-431d-9265-af80853d156d", 00:17:35.184 "assigned_rate_limits": { 00:17:35.184 "rw_ios_per_sec": 0, 00:17:35.184 "rw_mbytes_per_sec": 0, 00:17:35.184 "r_mbytes_per_sec": 0, 00:17:35.184 "w_mbytes_per_sec": 0 00:17:35.184 }, 00:17:35.184 "claimed": false, 00:17:35.184 "zoned": false, 00:17:35.184 "supported_io_types": { 00:17:35.184 "read": true, 00:17:35.184 "write": true, 00:17:35.184 "unmap": true, 00:17:35.184 "write_zeroes": true, 00:17:35.184 "flush": true, 00:17:35.184 "reset": true, 00:17:35.184 "compare": false, 00:17:35.184 "compare_and_write": false, 00:17:35.184 "abort": true, 00:17:35.184 "nvme_admin": false, 00:17:35.184 "nvme_io": false 00:17:35.184 }, 00:17:35.184 "memory_domains": [ 00:17:35.184 { 00:17:35.184 "dma_device_id": "system", 00:17:35.184 "dma_device_type": 1 00:17:35.184 }, 00:17:35.184 { 00:17:35.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.184 "dma_device_type": 2 00:17:35.184 } 00:17:35.184 ], 00:17:35.184 "driver_specific": 
{} 00:17:35.184 } 00:17:35.184 ] 00:17:35.184 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:35.184 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:35.184 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:35.184 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:35.444 BaseBdev3 00:17:35.444 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:17:35.444 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:35.444 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:35.444 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:35.444 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:35.444 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:35.444 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:35.701 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:35.959 [ 00:17:35.959 { 00:17:35.959 "name": "BaseBdev3", 00:17:35.959 "aliases": [ 00:17:35.959 "5943876c-263a-4587-a22b-ebd2cff99067" 00:17:35.959 ], 00:17:35.959 "product_name": "Malloc disk", 00:17:35.959 "block_size": 512, 00:17:35.959 "num_blocks": 65536, 00:17:35.959 "uuid": "5943876c-263a-4587-a22b-ebd2cff99067", 00:17:35.959 "assigned_rate_limits": { 00:17:35.959 "rw_ios_per_sec": 0, 00:17:35.959 "rw_mbytes_per_sec": 0, 00:17:35.959 "r_mbytes_per_sec": 0, 00:17:35.959 "w_mbytes_per_sec": 0 00:17:35.959 }, 00:17:35.959 "claimed": false, 00:17:35.959 "zoned": false, 00:17:35.959 "supported_io_types": { 00:17:35.959 "read": true, 00:17:35.959 "write": true, 00:17:35.959 "unmap": true, 00:17:35.959 "write_zeroes": true, 00:17:35.959 "flush": true, 00:17:35.959 "reset": true, 00:17:35.959 "compare": false, 00:17:35.959 "compare_and_write": false, 00:17:35.959 "abort": true, 00:17:35.959 "nvme_admin": false, 00:17:35.959 "nvme_io": false 00:17:35.959 }, 00:17:35.959 "memory_domains": [ 00:17:35.959 { 00:17:35.959 "dma_device_id": "system", 00:17:35.959 "dma_device_type": 1 00:17:35.959 }, 00:17:35.959 { 00:17:35.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.959 "dma_device_type": 2 00:17:35.959 } 00:17:35.959 ], 00:17:35.959 "driver_specific": {} 00:17:35.959 } 00:17:35.959 ] 00:17:35.959 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:35.959 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:35.959 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:35.959 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 
32 512 -b BaseBdev4 00:17:36.216 BaseBdev4 00:17:36.216 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:17:36.216 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:17:36.216 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:36.216 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:36.216 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:36.216 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:36.216 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:36.599 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:36.856 [ 00:17:36.856 { 00:17:36.856 "name": "BaseBdev4", 00:17:36.856 "aliases": [ 00:17:36.856 "a602f821-c561-480d-9450-06652b0252ee" 00:17:36.856 ], 00:17:36.856 "product_name": "Malloc disk", 00:17:36.856 "block_size": 512, 00:17:36.856 "num_blocks": 65536, 00:17:36.856 "uuid": "a602f821-c561-480d-9450-06652b0252ee", 00:17:36.856 "assigned_rate_limits": { 00:17:36.856 "rw_ios_per_sec": 0, 00:17:36.856 "rw_mbytes_per_sec": 0, 00:17:36.856 "r_mbytes_per_sec": 0, 00:17:36.856 "w_mbytes_per_sec": 0 00:17:36.856 }, 00:17:36.856 "claimed": false, 00:17:36.856 "zoned": false, 00:17:36.856 "supported_io_types": { 00:17:36.856 "read": true, 00:17:36.856 "write": true, 00:17:36.856 "unmap": true, 00:17:36.856 "write_zeroes": true, 00:17:36.856 "flush": true, 00:17:36.856 "reset": true, 00:17:36.856 "compare": false, 00:17:36.856 "compare_and_write": false, 00:17:36.856 "abort": true, 00:17:36.856 "nvme_admin": false, 00:17:36.856 "nvme_io": false 00:17:36.856 }, 00:17:36.856 "memory_domains": [ 00:17:36.856 { 00:17:36.856 "dma_device_id": "system", 00:17:36.856 "dma_device_type": 1 00:17:36.856 }, 00:17:36.856 { 00:17:36.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.856 "dma_device_type": 2 00:17:36.857 } 00:17:36.857 ], 00:17:36.857 "driver_specific": {} 00:17:36.857 } 00:17:36.857 ] 00:17:36.857 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:36.857 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:36.857 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:36.857 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:37.115 [2024-05-15 03:12:08.031116] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.115 [2024-05-15 03:12:08.031154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:37.115 [2024-05-15 03:12:08.031172] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.115 [2024-05-15 03:12:08.032758] bdev_raid.c:3122:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:17:37.115 [2024-05-15 03:12:08.032801] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:37.115 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:37.115 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:37.115 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:37.115 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:37.115 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:37.115 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:37.115 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.115 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.115 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.115 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.115 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.115 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.374 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.374 "name": "Existed_Raid", 00:17:37.374 "uuid": "fe4f9bed-0cb7-447b-8d95-c83e16efb2cd", 00:17:37.374 "strip_size_kb": 64, 00:17:37.374 "state": "configuring", 00:17:37.374 "raid_level": "concat", 00:17:37.374 "superblock": true, 00:17:37.374 "num_base_bdevs": 4, 00:17:37.374 "num_base_bdevs_discovered": 3, 00:17:37.374 "num_base_bdevs_operational": 4, 00:17:37.374 "base_bdevs_list": [ 00:17:37.374 { 00:17:37.374 "name": "BaseBdev1", 00:17:37.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.374 "is_configured": false, 00:17:37.374 "data_offset": 0, 00:17:37.374 "data_size": 0 00:17:37.374 }, 00:17:37.374 { 00:17:37.374 "name": "BaseBdev2", 00:17:37.374 "uuid": "ed8efa7b-75d3-431d-9265-af80853d156d", 00:17:37.374 "is_configured": true, 00:17:37.374 "data_offset": 2048, 00:17:37.374 "data_size": 63488 00:17:37.374 }, 00:17:37.374 { 00:17:37.374 "name": "BaseBdev3", 00:17:37.374 "uuid": "5943876c-263a-4587-a22b-ebd2cff99067", 00:17:37.374 "is_configured": true, 00:17:37.374 "data_offset": 2048, 00:17:37.374 "data_size": 63488 00:17:37.374 }, 00:17:37.374 { 00:17:37.374 "name": "BaseBdev4", 00:17:37.374 "uuid": "a602f821-c561-480d-9450-06652b0252ee", 00:17:37.374 "is_configured": true, 00:17:37.374 "data_offset": 2048, 00:17:37.374 "data_size": 63488 00:17:37.374 } 00:17:37.374 ] 00:17:37.374 }' 00:17:37.374 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.374 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.940 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev 
BaseBdev2 00:17:38.198 [2024-05-15 03:12:09.154081] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:38.198 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:38.198 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:38.198 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:38.198 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:38.198 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:38.198 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:38.198 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.198 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.198 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.198 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.198 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.198 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.456 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:38.456 "name": "Existed_Raid", 00:17:38.456 "uuid": "fe4f9bed-0cb7-447b-8d95-c83e16efb2cd", 00:17:38.456 "strip_size_kb": 64, 00:17:38.456 "state": "configuring", 00:17:38.456 "raid_level": "concat", 00:17:38.456 "superblock": true, 00:17:38.456 "num_base_bdevs": 4, 00:17:38.456 "num_base_bdevs_discovered": 2, 00:17:38.456 "num_base_bdevs_operational": 4, 00:17:38.456 "base_bdevs_list": [ 00:17:38.456 { 00:17:38.456 "name": "BaseBdev1", 00:17:38.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.456 "is_configured": false, 00:17:38.456 "data_offset": 0, 00:17:38.456 "data_size": 0 00:17:38.456 }, 00:17:38.456 { 00:17:38.456 "name": null, 00:17:38.456 "uuid": "ed8efa7b-75d3-431d-9265-af80853d156d", 00:17:38.456 "is_configured": false, 00:17:38.456 "data_offset": 2048, 00:17:38.456 "data_size": 63488 00:17:38.456 }, 00:17:38.456 { 00:17:38.456 "name": "BaseBdev3", 00:17:38.456 "uuid": "5943876c-263a-4587-a22b-ebd2cff99067", 00:17:38.456 "is_configured": true, 00:17:38.456 "data_offset": 2048, 00:17:38.456 "data_size": 63488 00:17:38.456 }, 00:17:38.456 { 00:17:38.456 "name": "BaseBdev4", 00:17:38.456 "uuid": "a602f821-c561-480d-9450-06652b0252ee", 00:17:38.456 "is_configured": true, 00:17:38.456 "data_offset": 2048, 00:17:38.456 "data_size": 63488 00:17:38.456 } 00:17:38.456 ] 00:17:38.456 }' 00:17:38.456 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:38.456 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.021 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.021 03:12:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:39.279 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:17:39.279 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:39.279 [2024-05-15 03:12:10.402138] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.279 BaseBdev1 00:17:39.279 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:17:39.279 03:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:39.279 03:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:39.279 03:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:39.279 03:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:39.279 03:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:39.279 03:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.537 03:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:39.794 [ 00:17:39.794 { 00:17:39.794 "name": "BaseBdev1", 00:17:39.795 "aliases": [ 00:17:39.795 "5aa1d2f9-c7e3-4ebb-b582-857a1c146549" 00:17:39.795 ], 00:17:39.795 "product_name": "Malloc disk", 00:17:39.795 "block_size": 512, 00:17:39.795 "num_blocks": 65536, 00:17:39.795 "uuid": "5aa1d2f9-c7e3-4ebb-b582-857a1c146549", 00:17:39.795 "assigned_rate_limits": { 00:17:39.795 "rw_ios_per_sec": 0, 00:17:39.795 "rw_mbytes_per_sec": 0, 00:17:39.795 "r_mbytes_per_sec": 0, 00:17:39.795 "w_mbytes_per_sec": 0 00:17:39.795 }, 00:17:39.795 "claimed": true, 00:17:39.795 "claim_type": "exclusive_write", 00:17:39.795 "zoned": false, 00:17:39.795 "supported_io_types": { 00:17:39.795 "read": true, 00:17:39.795 "write": true, 00:17:39.795 "unmap": true, 00:17:39.795 "write_zeroes": true, 00:17:39.795 "flush": true, 00:17:39.795 "reset": true, 00:17:39.795 "compare": false, 00:17:39.795 "compare_and_write": false, 00:17:39.795 "abort": true, 00:17:39.795 "nvme_admin": false, 00:17:39.795 "nvme_io": false 00:17:39.795 }, 00:17:39.795 "memory_domains": [ 00:17:39.795 { 00:17:39.795 "dma_device_id": "system", 00:17:39.795 "dma_device_type": 1 00:17:39.795 }, 00:17:39.795 { 00:17:39.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.795 "dma_device_type": 2 00:17:39.795 } 00:17:39.795 ], 00:17:39.795 "driver_specific": {} 00:17:39.795 } 00:17:39.795 ] 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.795 "name": "Existed_Raid", 00:17:39.795 "uuid": "fe4f9bed-0cb7-447b-8d95-c83e16efb2cd", 00:17:39.795 "strip_size_kb": 64, 00:17:39.795 "state": "configuring", 00:17:39.795 "raid_level": "concat", 00:17:39.795 "superblock": true, 00:17:39.795 "num_base_bdevs": 4, 00:17:39.795 "num_base_bdevs_discovered": 3, 00:17:39.795 "num_base_bdevs_operational": 4, 00:17:39.795 "base_bdevs_list": [ 00:17:39.795 { 00:17:39.795 "name": "BaseBdev1", 00:17:39.795 "uuid": "5aa1d2f9-c7e3-4ebb-b582-857a1c146549", 00:17:39.795 "is_configured": true, 00:17:39.795 "data_offset": 2048, 00:17:39.795 "data_size": 63488 00:17:39.795 }, 00:17:39.795 { 00:17:39.795 "name": null, 00:17:39.795 "uuid": "ed8efa7b-75d3-431d-9265-af80853d156d", 00:17:39.795 "is_configured": false, 00:17:39.795 "data_offset": 2048, 00:17:39.795 "data_size": 63488 00:17:39.795 }, 00:17:39.795 { 00:17:39.795 "name": "BaseBdev3", 00:17:39.795 "uuid": "5943876c-263a-4587-a22b-ebd2cff99067", 00:17:39.795 "is_configured": true, 00:17:39.795 "data_offset": 2048, 00:17:39.795 "data_size": 63488 00:17:39.795 }, 00:17:39.795 { 00:17:39.795 "name": "BaseBdev4", 00:17:39.795 "uuid": "a602f821-c561-480d-9450-06652b0252ee", 00:17:39.795 "is_configured": true, 00:17:39.795 "data_offset": 2048, 00:17:39.795 "data_size": 63488 00:17:39.795 } 00:17:39.795 ] 00:17:39.795 }' 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.795 03:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.360 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.360 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:40.618 [2024-05-15 03:12:11.741753] 
bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.618 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.876 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.876 "name": "Existed_Raid", 00:17:40.876 "uuid": "fe4f9bed-0cb7-447b-8d95-c83e16efb2cd", 00:17:40.876 "strip_size_kb": 64, 00:17:40.876 "state": "configuring", 00:17:40.876 "raid_level": "concat", 00:17:40.876 "superblock": true, 00:17:40.876 "num_base_bdevs": 4, 00:17:40.876 "num_base_bdevs_discovered": 2, 00:17:40.876 "num_base_bdevs_operational": 4, 00:17:40.876 "base_bdevs_list": [ 00:17:40.876 { 00:17:40.876 "name": "BaseBdev1", 00:17:40.876 "uuid": "5aa1d2f9-c7e3-4ebb-b582-857a1c146549", 00:17:40.876 "is_configured": true, 00:17:40.876 "data_offset": 2048, 00:17:40.876 "data_size": 63488 00:17:40.876 }, 00:17:40.876 { 00:17:40.876 "name": null, 00:17:40.876 "uuid": "ed8efa7b-75d3-431d-9265-af80853d156d", 00:17:40.876 "is_configured": false, 00:17:40.876 "data_offset": 2048, 00:17:40.876 "data_size": 63488 00:17:40.876 }, 00:17:40.876 { 00:17:40.876 "name": null, 00:17:40.876 "uuid": "5943876c-263a-4587-a22b-ebd2cff99067", 00:17:40.876 "is_configured": false, 00:17:40.876 "data_offset": 2048, 00:17:40.876 "data_size": 63488 00:17:40.876 }, 00:17:40.876 { 00:17:40.876 "name": "BaseBdev4", 00:17:40.876 "uuid": "a602f821-c561-480d-9450-06652b0252ee", 00:17:40.876 "is_configured": true, 00:17:40.876 "data_offset": 2048, 00:17:40.876 "data_size": 63488 00:17:40.876 } 00:17:40.876 ] 00:17:40.876 }' 00:17:40.876 03:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.876 03:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.442 03:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.442 03:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:17:41.700 03:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:17:41.700 03:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:41.957 [2024-05-15 03:12:13.049261] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:41.957 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:41.957 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:41.957 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:41.957 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:41.957 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:41.957 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:41.957 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.957 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.957 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.957 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.957 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.957 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.214 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.214 "name": "Existed_Raid", 00:17:42.214 "uuid": "fe4f9bed-0cb7-447b-8d95-c83e16efb2cd", 00:17:42.214 "strip_size_kb": 64, 00:17:42.214 "state": "configuring", 00:17:42.214 "raid_level": "concat", 00:17:42.214 "superblock": true, 00:17:42.214 "num_base_bdevs": 4, 00:17:42.214 "num_base_bdevs_discovered": 3, 00:17:42.214 "num_base_bdevs_operational": 4, 00:17:42.214 "base_bdevs_list": [ 00:17:42.214 { 00:17:42.214 "name": "BaseBdev1", 00:17:42.214 "uuid": "5aa1d2f9-c7e3-4ebb-b582-857a1c146549", 00:17:42.214 "is_configured": true, 00:17:42.214 "data_offset": 2048, 00:17:42.214 "data_size": 63488 00:17:42.214 }, 00:17:42.214 { 00:17:42.214 "name": null, 00:17:42.214 "uuid": "ed8efa7b-75d3-431d-9265-af80853d156d", 00:17:42.214 "is_configured": false, 00:17:42.214 "data_offset": 2048, 00:17:42.214 "data_size": 63488 00:17:42.214 }, 00:17:42.214 { 00:17:42.214 "name": "BaseBdev3", 00:17:42.214 "uuid": "5943876c-263a-4587-a22b-ebd2cff99067", 00:17:42.214 "is_configured": true, 00:17:42.214 "data_offset": 2048, 00:17:42.214 "data_size": 63488 00:17:42.214 }, 00:17:42.214 { 00:17:42.214 "name": "BaseBdev4", 00:17:42.214 "uuid": "a602f821-c561-480d-9450-06652b0252ee", 00:17:42.214 "is_configured": true, 00:17:42.214 "data_offset": 2048, 00:17:42.214 "data_size": 63488 00:17:42.214 } 00:17:42.214 ] 00:17:42.214 }' 00:17:42.214 03:12:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.214 03:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.780 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.780 03:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:43.038 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:17:43.038 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:43.298 [2024-05-15 03:12:14.292602] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:43.298 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:43.298 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:43.298 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:43.298 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:43.298 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:43.298 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:43.298 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.298 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.298 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.298 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.298 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.298 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.574 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.574 "name": "Existed_Raid", 00:17:43.574 "uuid": "fe4f9bed-0cb7-447b-8d95-c83e16efb2cd", 00:17:43.574 "strip_size_kb": 64, 00:17:43.574 "state": "configuring", 00:17:43.574 "raid_level": "concat", 00:17:43.574 "superblock": true, 00:17:43.574 "num_base_bdevs": 4, 00:17:43.574 "num_base_bdevs_discovered": 2, 00:17:43.574 "num_base_bdevs_operational": 4, 00:17:43.574 "base_bdevs_list": [ 00:17:43.574 { 00:17:43.574 "name": null, 00:17:43.574 "uuid": "5aa1d2f9-c7e3-4ebb-b582-857a1c146549", 00:17:43.574 "is_configured": false, 00:17:43.574 "data_offset": 2048, 00:17:43.574 "data_size": 63488 00:17:43.574 }, 00:17:43.574 { 00:17:43.574 "name": null, 00:17:43.574 "uuid": "ed8efa7b-75d3-431d-9265-af80853d156d", 00:17:43.574 "is_configured": false, 00:17:43.574 "data_offset": 2048, 00:17:43.574 "data_size": 63488 00:17:43.574 }, 00:17:43.574 { 00:17:43.574 "name": "BaseBdev3", 00:17:43.574 "uuid": "5943876c-263a-4587-a22b-ebd2cff99067", 00:17:43.574 "is_configured": true, 00:17:43.574 
"data_offset": 2048, 00:17:43.574 "data_size": 63488 00:17:43.574 }, 00:17:43.574 { 00:17:43.574 "name": "BaseBdev4", 00:17:43.574 "uuid": "a602f821-c561-480d-9450-06652b0252ee", 00:17:43.574 "is_configured": true, 00:17:43.574 "data_offset": 2048, 00:17:43.574 "data_size": 63488 00:17:43.574 } 00:17:43.574 ] 00:17:43.574 }' 00:17:43.574 03:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.574 03:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.154 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.154 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:44.154 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:17:44.154 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:44.412 [2024-05-15 03:12:15.442356] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.412 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:44.412 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:44.412 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.412 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:44.412 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:44.412 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:44.412 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.412 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.412 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.412 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.412 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.412 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.670 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.670 "name": "Existed_Raid", 00:17:44.670 "uuid": "fe4f9bed-0cb7-447b-8d95-c83e16efb2cd", 00:17:44.670 "strip_size_kb": 64, 00:17:44.670 "state": "configuring", 00:17:44.670 "raid_level": "concat", 00:17:44.670 "superblock": true, 00:17:44.670 "num_base_bdevs": 4, 00:17:44.670 "num_base_bdevs_discovered": 3, 00:17:44.670 "num_base_bdevs_operational": 4, 00:17:44.670 "base_bdevs_list": [ 00:17:44.670 { 00:17:44.670 "name": null, 00:17:44.670 "uuid": "5aa1d2f9-c7e3-4ebb-b582-857a1c146549", 00:17:44.670 "is_configured": false, 00:17:44.670 "data_offset": 2048, 
00:17:44.670 "data_size": 63488 00:17:44.670 }, 00:17:44.670 { 00:17:44.670 "name": "BaseBdev2", 00:17:44.670 "uuid": "ed8efa7b-75d3-431d-9265-af80853d156d", 00:17:44.670 "is_configured": true, 00:17:44.670 "data_offset": 2048, 00:17:44.670 "data_size": 63488 00:17:44.670 }, 00:17:44.670 { 00:17:44.670 "name": "BaseBdev3", 00:17:44.670 "uuid": "5943876c-263a-4587-a22b-ebd2cff99067", 00:17:44.670 "is_configured": true, 00:17:44.670 "data_offset": 2048, 00:17:44.670 "data_size": 63488 00:17:44.670 }, 00:17:44.670 { 00:17:44.670 "name": "BaseBdev4", 00:17:44.670 "uuid": "a602f821-c561-480d-9450-06652b0252ee", 00:17:44.670 "is_configured": true, 00:17:44.670 "data_offset": 2048, 00:17:44.670 "data_size": 63488 00:17:44.670 } 00:17:44.670 ] 00:17:44.670 }' 00:17:44.670 03:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.670 03:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.236 03:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.236 03:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:45.493 03:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:17:45.493 03:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.493 03:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:45.751 03:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5aa1d2f9-c7e3-4ebb-b582-857a1c146549 00:17:46.008 [2024-05-15 03:12:17.095221] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:46.008 [2024-05-15 03:12:17.095381] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x2537ab0 00:17:46.008 [2024-05-15 03:12:17.095393] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:46.009 [2024-05-15 03:12:17.095580] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x252dbc0 00:17:46.009 [2024-05-15 03:12:17.095719] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2537ab0 00:17:46.009 [2024-05-15 03:12:17.095728] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2537ab0 00:17:46.009 [2024-05-15 03:12:17.095826] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.009 NewBaseBdev 00:17:46.009 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:17:46.009 03:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:17:46.009 03:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:46.009 03:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:46.009 03:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:46.009 03:12:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:46.009 03:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.267 03:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:46.267 [ 00:17:46.267 { 00:17:46.267 "name": "NewBaseBdev", 00:17:46.267 "aliases": [ 00:17:46.267 "5aa1d2f9-c7e3-4ebb-b582-857a1c146549" 00:17:46.267 ], 00:17:46.267 "product_name": "Malloc disk", 00:17:46.267 "block_size": 512, 00:17:46.267 "num_blocks": 65536, 00:17:46.267 "uuid": "5aa1d2f9-c7e3-4ebb-b582-857a1c146549", 00:17:46.267 "assigned_rate_limits": { 00:17:46.267 "rw_ios_per_sec": 0, 00:17:46.267 "rw_mbytes_per_sec": 0, 00:17:46.267 "r_mbytes_per_sec": 0, 00:17:46.267 "w_mbytes_per_sec": 0 00:17:46.267 }, 00:17:46.267 "claimed": true, 00:17:46.267 "claim_type": "exclusive_write", 00:17:46.267 "zoned": false, 00:17:46.267 "supported_io_types": { 00:17:46.267 "read": true, 00:17:46.267 "write": true, 00:17:46.267 "unmap": true, 00:17:46.267 "write_zeroes": true, 00:17:46.267 "flush": true, 00:17:46.267 "reset": true, 00:17:46.267 "compare": false, 00:17:46.267 "compare_and_write": false, 00:17:46.267 "abort": true, 00:17:46.267 "nvme_admin": false, 00:17:46.267 "nvme_io": false 00:17:46.267 }, 00:17:46.267 "memory_domains": [ 00:17:46.267 { 00:17:46.267 "dma_device_id": "system", 00:17:46.267 "dma_device_type": 1 00:17:46.267 }, 00:17:46.267 { 00:17:46.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.267 "dma_device_type": 2 00:17:46.267 } 00:17:46.267 ], 00:17:46.267 "driver_specific": {} 00:17:46.267 } 00:17:46.267 ] 00:17:46.267 03:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:46.523 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:46.523 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:46.523 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:46.523 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:46.523 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:46.523 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:46.523 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.524 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.524 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.524 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.524 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.524 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.524 03:12:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.524 "name": "Existed_Raid", 00:17:46.524 "uuid": "fe4f9bed-0cb7-447b-8d95-c83e16efb2cd", 00:17:46.524 "strip_size_kb": 64, 00:17:46.524 "state": "online", 00:17:46.524 "raid_level": "concat", 00:17:46.524 "superblock": true, 00:17:46.524 "num_base_bdevs": 4, 00:17:46.524 "num_base_bdevs_discovered": 4, 00:17:46.524 "num_base_bdevs_operational": 4, 00:17:46.524 "base_bdevs_list": [ 00:17:46.524 { 00:17:46.524 "name": "NewBaseBdev", 00:17:46.524 "uuid": "5aa1d2f9-c7e3-4ebb-b582-857a1c146549", 00:17:46.524 "is_configured": true, 00:17:46.524 "data_offset": 2048, 00:17:46.524 "data_size": 63488 00:17:46.524 }, 00:17:46.524 { 00:17:46.524 "name": "BaseBdev2", 00:17:46.524 "uuid": "ed8efa7b-75d3-431d-9265-af80853d156d", 00:17:46.524 "is_configured": true, 00:17:46.524 "data_offset": 2048, 00:17:46.524 "data_size": 63488 00:17:46.524 }, 00:17:46.524 { 00:17:46.524 "name": "BaseBdev3", 00:17:46.524 "uuid": "5943876c-263a-4587-a22b-ebd2cff99067", 00:17:46.524 "is_configured": true, 00:17:46.524 "data_offset": 2048, 00:17:46.524 "data_size": 63488 00:17:46.524 }, 00:17:46.524 { 00:17:46.524 "name": "BaseBdev4", 00:17:46.524 "uuid": "a602f821-c561-480d-9450-06652b0252ee", 00:17:46.524 "is_configured": true, 00:17:46.524 "data_offset": 2048, 00:17:46.524 "data_size": 63488 00:17:46.524 } 00:17:46.524 ] 00:17:46.524 }' 00:17:46.524 03:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.524 03:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.456 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:17:47.456 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:17:47.456 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:47.456 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:47.456 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:47.456 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:17:47.456 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:47.456 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:47.456 [2024-05-15 03:12:18.515357] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.456 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:47.456 "name": "Existed_Raid", 00:17:47.456 "aliases": [ 00:17:47.456 "fe4f9bed-0cb7-447b-8d95-c83e16efb2cd" 00:17:47.456 ], 00:17:47.456 "product_name": "Raid Volume", 00:17:47.456 "block_size": 512, 00:17:47.456 "num_blocks": 253952, 00:17:47.456 "uuid": "fe4f9bed-0cb7-447b-8d95-c83e16efb2cd", 00:17:47.456 "assigned_rate_limits": { 00:17:47.456 "rw_ios_per_sec": 0, 00:17:47.456 "rw_mbytes_per_sec": 0, 00:17:47.456 "r_mbytes_per_sec": 0, 00:17:47.456 "w_mbytes_per_sec": 0 00:17:47.456 }, 00:17:47.456 "claimed": false, 00:17:47.456 "zoned": false, 00:17:47.456 "supported_io_types": { 00:17:47.456 "read": true, 00:17:47.456 "write": true, 00:17:47.456 "unmap": true, 00:17:47.456 
"write_zeroes": true, 00:17:47.456 "flush": true, 00:17:47.456 "reset": true, 00:17:47.456 "compare": false, 00:17:47.456 "compare_and_write": false, 00:17:47.456 "abort": false, 00:17:47.456 "nvme_admin": false, 00:17:47.456 "nvme_io": false 00:17:47.456 }, 00:17:47.456 "memory_domains": [ 00:17:47.456 { 00:17:47.456 "dma_device_id": "system", 00:17:47.456 "dma_device_type": 1 00:17:47.456 }, 00:17:47.456 { 00:17:47.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.456 "dma_device_type": 2 00:17:47.456 }, 00:17:47.456 { 00:17:47.456 "dma_device_id": "system", 00:17:47.456 "dma_device_type": 1 00:17:47.456 }, 00:17:47.456 { 00:17:47.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.456 "dma_device_type": 2 00:17:47.456 }, 00:17:47.456 { 00:17:47.456 "dma_device_id": "system", 00:17:47.456 "dma_device_type": 1 00:17:47.456 }, 00:17:47.456 { 00:17:47.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.456 "dma_device_type": 2 00:17:47.456 }, 00:17:47.456 { 00:17:47.456 "dma_device_id": "system", 00:17:47.456 "dma_device_type": 1 00:17:47.456 }, 00:17:47.456 { 00:17:47.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.456 "dma_device_type": 2 00:17:47.456 } 00:17:47.457 ], 00:17:47.457 "driver_specific": { 00:17:47.457 "raid": { 00:17:47.457 "uuid": "fe4f9bed-0cb7-447b-8d95-c83e16efb2cd", 00:17:47.457 "strip_size_kb": 64, 00:17:47.457 "state": "online", 00:17:47.457 "raid_level": "concat", 00:17:47.457 "superblock": true, 00:17:47.457 "num_base_bdevs": 4, 00:17:47.457 "num_base_bdevs_discovered": 4, 00:17:47.457 "num_base_bdevs_operational": 4, 00:17:47.457 "base_bdevs_list": [ 00:17:47.457 { 00:17:47.457 "name": "NewBaseBdev", 00:17:47.457 "uuid": "5aa1d2f9-c7e3-4ebb-b582-857a1c146549", 00:17:47.457 "is_configured": true, 00:17:47.457 "data_offset": 2048, 00:17:47.457 "data_size": 63488 00:17:47.457 }, 00:17:47.457 { 00:17:47.457 "name": "BaseBdev2", 00:17:47.457 "uuid": "ed8efa7b-75d3-431d-9265-af80853d156d", 00:17:47.457 "is_configured": true, 00:17:47.457 "data_offset": 2048, 00:17:47.457 "data_size": 63488 00:17:47.457 }, 00:17:47.457 { 00:17:47.457 "name": "BaseBdev3", 00:17:47.457 "uuid": "5943876c-263a-4587-a22b-ebd2cff99067", 00:17:47.457 "is_configured": true, 00:17:47.457 "data_offset": 2048, 00:17:47.457 "data_size": 63488 00:17:47.457 }, 00:17:47.457 { 00:17:47.457 "name": "BaseBdev4", 00:17:47.457 "uuid": "a602f821-c561-480d-9450-06652b0252ee", 00:17:47.457 "is_configured": true, 00:17:47.457 "data_offset": 2048, 00:17:47.457 "data_size": 63488 00:17:47.457 } 00:17:47.457 ] 00:17:47.457 } 00:17:47.457 } 00:17:47.457 }' 00:17:47.457 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:47.457 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:17:47.457 BaseBdev2 00:17:47.457 BaseBdev3 00:17:47.457 BaseBdev4' 00:17:47.457 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:47.457 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:47.457 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:47.715 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:47.715 "name": "NewBaseBdev", 00:17:47.715 "aliases": 
[ 00:17:47.715 "5aa1d2f9-c7e3-4ebb-b582-857a1c146549" 00:17:47.715 ], 00:17:47.715 "product_name": "Malloc disk", 00:17:47.715 "block_size": 512, 00:17:47.715 "num_blocks": 65536, 00:17:47.715 "uuid": "5aa1d2f9-c7e3-4ebb-b582-857a1c146549", 00:17:47.715 "assigned_rate_limits": { 00:17:47.715 "rw_ios_per_sec": 0, 00:17:47.715 "rw_mbytes_per_sec": 0, 00:17:47.715 "r_mbytes_per_sec": 0, 00:17:47.715 "w_mbytes_per_sec": 0 00:17:47.715 }, 00:17:47.715 "claimed": true, 00:17:47.715 "claim_type": "exclusive_write", 00:17:47.715 "zoned": false, 00:17:47.715 "supported_io_types": { 00:17:47.715 "read": true, 00:17:47.715 "write": true, 00:17:47.715 "unmap": true, 00:17:47.715 "write_zeroes": true, 00:17:47.715 "flush": true, 00:17:47.715 "reset": true, 00:17:47.715 "compare": false, 00:17:47.715 "compare_and_write": false, 00:17:47.715 "abort": true, 00:17:47.715 "nvme_admin": false, 00:17:47.715 "nvme_io": false 00:17:47.715 }, 00:17:47.715 "memory_domains": [ 00:17:47.715 { 00:17:47.715 "dma_device_id": "system", 00:17:47.715 "dma_device_type": 1 00:17:47.715 }, 00:17:47.715 { 00:17:47.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.715 "dma_device_type": 2 00:17:47.715 } 00:17:47.715 ], 00:17:47.715 "driver_specific": {} 00:17:47.715 }' 00:17:47.715 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:47.973 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:47.973 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:47.973 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:47.973 03:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:47.973 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:47.973 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:47.973 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:47.973 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:47.973 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:48.231 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:48.231 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:48.231 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:48.231 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:48.231 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:48.489 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:48.489 "name": "BaseBdev2", 00:17:48.489 "aliases": [ 00:17:48.489 "ed8efa7b-75d3-431d-9265-af80853d156d" 00:17:48.489 ], 00:17:48.489 "product_name": "Malloc disk", 00:17:48.489 "block_size": 512, 00:17:48.489 "num_blocks": 65536, 00:17:48.489 "uuid": "ed8efa7b-75d3-431d-9265-af80853d156d", 00:17:48.489 "assigned_rate_limits": { 00:17:48.489 "rw_ios_per_sec": 0, 00:17:48.489 "rw_mbytes_per_sec": 0, 00:17:48.489 "r_mbytes_per_sec": 0, 00:17:48.489 "w_mbytes_per_sec": 0 00:17:48.489 
}, 00:17:48.489 "claimed": true, 00:17:48.489 "claim_type": "exclusive_write", 00:17:48.489 "zoned": false, 00:17:48.489 "supported_io_types": { 00:17:48.489 "read": true, 00:17:48.489 "write": true, 00:17:48.489 "unmap": true, 00:17:48.489 "write_zeroes": true, 00:17:48.489 "flush": true, 00:17:48.489 "reset": true, 00:17:48.489 "compare": false, 00:17:48.489 "compare_and_write": false, 00:17:48.489 "abort": true, 00:17:48.489 "nvme_admin": false, 00:17:48.489 "nvme_io": false 00:17:48.489 }, 00:17:48.489 "memory_domains": [ 00:17:48.489 { 00:17:48.489 "dma_device_id": "system", 00:17:48.489 "dma_device_type": 1 00:17:48.489 }, 00:17:48.489 { 00:17:48.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.489 "dma_device_type": 2 00:17:48.489 } 00:17:48.489 ], 00:17:48.489 "driver_specific": {} 00:17:48.489 }' 00:17:48.489 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:48.489 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:48.489 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:48.489 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:48.489 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:48.489 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:48.489 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:48.747 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:48.747 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:48.747 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:48.747 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:48.747 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:48.747 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:48.747 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:48.747 03:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:49.005 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:49.005 "name": "BaseBdev3", 00:17:49.005 "aliases": [ 00:17:49.005 "5943876c-263a-4587-a22b-ebd2cff99067" 00:17:49.005 ], 00:17:49.005 "product_name": "Malloc disk", 00:17:49.005 "block_size": 512, 00:17:49.005 "num_blocks": 65536, 00:17:49.005 "uuid": "5943876c-263a-4587-a22b-ebd2cff99067", 00:17:49.005 "assigned_rate_limits": { 00:17:49.005 "rw_ios_per_sec": 0, 00:17:49.005 "rw_mbytes_per_sec": 0, 00:17:49.005 "r_mbytes_per_sec": 0, 00:17:49.005 "w_mbytes_per_sec": 0 00:17:49.005 }, 00:17:49.005 "claimed": true, 00:17:49.005 "claim_type": "exclusive_write", 00:17:49.005 "zoned": false, 00:17:49.005 "supported_io_types": { 00:17:49.005 "read": true, 00:17:49.005 "write": true, 00:17:49.005 "unmap": true, 00:17:49.005 "write_zeroes": true, 00:17:49.005 "flush": true, 00:17:49.005 "reset": true, 00:17:49.005 "compare": false, 00:17:49.005 "compare_and_write": false, 00:17:49.005 "abort": true, 00:17:49.005 
"nvme_admin": false, 00:17:49.005 "nvme_io": false 00:17:49.005 }, 00:17:49.005 "memory_domains": [ 00:17:49.005 { 00:17:49.005 "dma_device_id": "system", 00:17:49.005 "dma_device_type": 1 00:17:49.005 }, 00:17:49.005 { 00:17:49.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.005 "dma_device_type": 2 00:17:49.005 } 00:17:49.005 ], 00:17:49.005 "driver_specific": {} 00:17:49.005 }' 00:17:49.005 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:49.005 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:49.263 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:49.263 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:49.263 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:49.263 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:49.263 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:49.263 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:49.263 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:49.263 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:49.263 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:49.520 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:49.520 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:49.520 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:49.520 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:49.778 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:49.778 "name": "BaseBdev4", 00:17:49.778 "aliases": [ 00:17:49.778 "a602f821-c561-480d-9450-06652b0252ee" 00:17:49.778 ], 00:17:49.778 "product_name": "Malloc disk", 00:17:49.778 "block_size": 512, 00:17:49.778 "num_blocks": 65536, 00:17:49.778 "uuid": "a602f821-c561-480d-9450-06652b0252ee", 00:17:49.778 "assigned_rate_limits": { 00:17:49.778 "rw_ios_per_sec": 0, 00:17:49.778 "rw_mbytes_per_sec": 0, 00:17:49.778 "r_mbytes_per_sec": 0, 00:17:49.778 "w_mbytes_per_sec": 0 00:17:49.778 }, 00:17:49.778 "claimed": true, 00:17:49.778 "claim_type": "exclusive_write", 00:17:49.778 "zoned": false, 00:17:49.778 "supported_io_types": { 00:17:49.778 "read": true, 00:17:49.778 "write": true, 00:17:49.778 "unmap": true, 00:17:49.778 "write_zeroes": true, 00:17:49.778 "flush": true, 00:17:49.778 "reset": true, 00:17:49.778 "compare": false, 00:17:49.778 "compare_and_write": false, 00:17:49.778 "abort": true, 00:17:49.778 "nvme_admin": false, 00:17:49.778 "nvme_io": false 00:17:49.778 }, 00:17:49.778 "memory_domains": [ 00:17:49.778 { 00:17:49.778 "dma_device_id": "system", 00:17:49.778 "dma_device_type": 1 00:17:49.778 }, 00:17:49.778 { 00:17:49.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.778 "dma_device_type": 2 00:17:49.778 } 00:17:49.778 ], 00:17:49.778 "driver_specific": {} 00:17:49.778 }' 00:17:49.778 03:12:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:49.778 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:49.778 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:49.778 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:49.778 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:49.778 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:49.778 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:50.036 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:50.036 03:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:50.036 03:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:50.036 03:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:50.036 03:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:50.036 03:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:50.295 [2024-05-15 03:12:21.322610] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:50.295 [2024-05-15 03:12:21.322633] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.295 [2024-05-15 03:12:21.322680] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.295 [2024-05-15 03:12:21.322745] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.295 [2024-05-15 03:12:21.322754] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2537ab0 name Existed_Raid, state offline 00:17:50.295 03:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 4124169 00:17:50.295 03:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 4124169 ']' 00:17:50.295 03:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 4124169 00:17:50.295 03:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:17:50.295 03:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:50.295 03:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4124169 00:17:50.295 03:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:50.295 03:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:50.295 03:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4124169' 00:17:50.295 killing process with pid 4124169 00:17:50.295 03:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 4124169 00:17:50.295 [2024-05-15 03:12:21.389532] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:50.295 03:12:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@970 -- # wait 4124169 00:17:50.295 [2024-05-15 03:12:21.448773] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:50.860 03:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:17:50.860 00:17:50.861 real 0m32.432s 00:17:50.861 user 1m0.553s 00:17:50.861 sys 0m4.624s 00:17:50.861 03:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:50.861 03:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.861 ************************************ 00:17:50.861 END TEST raid_state_function_test_sb 00:17:50.861 ************************************ 00:17:50.861 03:12:21 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:17:50.861 03:12:21 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:50.861 03:12:21 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:50.861 03:12:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.861 ************************************ 00:17:50.861 START TEST raid_superblock_test 00:17:50.861 ************************************ 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 4 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=4130571 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 4130571 /var/tmp/spdk-raid.sock 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # 
'[' -z 4130571 ']' 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:50.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:50.861 03:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.861 [2024-05-15 03:12:21.934280] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:17:50.861 [2024-05-15 03:12:21.934335] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130571 ] 00:17:51.118 [2024-05-15 03:12:22.034715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.118 [2024-05-15 03:12:22.127831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.118 [2024-05-15 03:12:22.191605] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.118 [2024-05-15 03:12:22.191650] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.051 03:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:52.052 03:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:17:52.052 03:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:52.052 03:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:52.052 03:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:52.052 03:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:52.052 03:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:52.052 03:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:52.052 03:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:52.052 03:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:52.052 03:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:52.052 malloc1 00:17:52.052 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.310 [2024-05-15 03:12:23.374275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.310 [2024-05-15 03:12:23.374320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.310 [2024-05-15 03:12:23.374338] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x172da00 
00:17:52.310 [2024-05-15 03:12:23.374347] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.310 [2024-05-15 03:12:23.375960] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.310 [2024-05-15 03:12:23.375986] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.310 pt1 00:17:52.310 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:52.310 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:52.310 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:52.310 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:52.310 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:52.310 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:52.310 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:52.310 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:52.310 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:52.566 malloc2 00:17:52.566 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:52.823 [2024-05-15 03:12:23.892179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:52.823 [2024-05-15 03:12:23.892221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.823 [2024-05-15 03:12:23.892241] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x172e5f0 00:17:52.823 [2024-05-15 03:12:23.892255] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.823 [2024-05-15 03:12:23.893783] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.824 [2024-05-15 03:12:23.893811] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:52.824 pt2 00:17:52.824 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:52.824 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:52.824 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:52.824 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:52.824 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:52.824 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:52.824 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:52.824 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:52.824 03:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b malloc3 00:17:53.081 malloc3 00:17:53.081 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:53.338 [2024-05-15 03:12:24.394043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:53.338 [2024-05-15 03:12:24.394083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.338 [2024-05-15 03:12:24.394099] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18d3900 00:17:53.338 [2024-05-15 03:12:24.394108] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.338 [2024-05-15 03:12:24.395549] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.338 [2024-05-15 03:12:24.395573] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:53.338 pt3 00:17:53.338 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:53.338 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.338 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:53.338 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:53.338 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:53.338 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.338 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.338 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.338 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:53.595 malloc4 00:17:53.595 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:53.853 [2024-05-15 03:12:24.895656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:53.853 [2024-05-15 03:12:24.895693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.853 [2024-05-15 03:12:24.895709] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1725630 00:17:53.853 [2024-05-15 03:12:24.895719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.853 [2024-05-15 03:12:24.897186] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.853 [2024-05-15 03:12:24.897211] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:53.853 pt4 00:17:53.853 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:53.853 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.853 03:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create 
-z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:54.110 [2024-05-15 03:12:25.144350] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:54.110 [2024-05-15 03:12:25.145641] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.110 [2024-05-15 03:12:25.145695] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:54.110 [2024-05-15 03:12:25.145741] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:54.110 [2024-05-15 03:12:25.145924] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1726900 00:17:54.110 [2024-05-15 03:12:25.145935] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:54.110 [2024-05-15 03:12:25.146130] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17268d0 00:17:54.110 [2024-05-15 03:12:25.146284] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1726900 00:17:54.110 [2024-05-15 03:12:25.146294] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1726900 00:17:54.110 [2024-05-15 03:12:25.146388] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.110 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:54.110 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:54.110 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:54.110 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:54.110 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:54.110 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:54.110 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.110 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.110 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.110 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.110 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.110 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.368 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.368 "name": "raid_bdev1", 00:17:54.368 "uuid": "2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a", 00:17:54.368 "strip_size_kb": 64, 00:17:54.368 "state": "online", 00:17:54.368 "raid_level": "concat", 00:17:54.368 "superblock": true, 00:17:54.368 "num_base_bdevs": 4, 00:17:54.368 "num_base_bdevs_discovered": 4, 00:17:54.368 "num_base_bdevs_operational": 4, 00:17:54.368 "base_bdevs_list": [ 00:17:54.368 { 00:17:54.368 "name": "pt1", 00:17:54.368 "uuid": "5e8aa6d9-2c10-53a7-95e5-7c0ab5d7f406", 00:17:54.368 "is_configured": true, 00:17:54.368 "data_offset": 2048, 00:17:54.368 "data_size": 63488 00:17:54.368 }, 00:17:54.368 { 00:17:54.368 "name": "pt2", 00:17:54.368 "uuid": "979b7ad2-bf13-5718-89ae-019fdabe59c1", 00:17:54.368 
"is_configured": true, 00:17:54.368 "data_offset": 2048, 00:17:54.368 "data_size": 63488 00:17:54.368 }, 00:17:54.368 { 00:17:54.368 "name": "pt3", 00:17:54.368 "uuid": "e52e9022-914d-518c-b875-4791dce08f73", 00:17:54.368 "is_configured": true, 00:17:54.368 "data_offset": 2048, 00:17:54.368 "data_size": 63488 00:17:54.368 }, 00:17:54.368 { 00:17:54.368 "name": "pt4", 00:17:54.368 "uuid": "cda235c9-7e76-53e2-91d4-eec39acd316c", 00:17:54.368 "is_configured": true, 00:17:54.368 "data_offset": 2048, 00:17:54.368 "data_size": 63488 00:17:54.368 } 00:17:54.368 ] 00:17:54.368 }' 00:17:54.368 03:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.368 03:12:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.932 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:54.932 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:17:54.932 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:54.932 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:54.932 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:54.932 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:17:54.932 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:54.932 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:55.190 [2024-05-15 03:12:26.243578] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.190 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:55.190 "name": "raid_bdev1", 00:17:55.190 "aliases": [ 00:17:55.190 "2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a" 00:17:55.190 ], 00:17:55.190 "product_name": "Raid Volume", 00:17:55.190 "block_size": 512, 00:17:55.190 "num_blocks": 253952, 00:17:55.190 "uuid": "2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a", 00:17:55.190 "assigned_rate_limits": { 00:17:55.190 "rw_ios_per_sec": 0, 00:17:55.190 "rw_mbytes_per_sec": 0, 00:17:55.190 "r_mbytes_per_sec": 0, 00:17:55.190 "w_mbytes_per_sec": 0 00:17:55.190 }, 00:17:55.190 "claimed": false, 00:17:55.190 "zoned": false, 00:17:55.190 "supported_io_types": { 00:17:55.190 "read": true, 00:17:55.190 "write": true, 00:17:55.190 "unmap": true, 00:17:55.190 "write_zeroes": true, 00:17:55.190 "flush": true, 00:17:55.190 "reset": true, 00:17:55.190 "compare": false, 00:17:55.190 "compare_and_write": false, 00:17:55.190 "abort": false, 00:17:55.190 "nvme_admin": false, 00:17:55.190 "nvme_io": false 00:17:55.190 }, 00:17:55.190 "memory_domains": [ 00:17:55.190 { 00:17:55.190 "dma_device_id": "system", 00:17:55.190 "dma_device_type": 1 00:17:55.190 }, 00:17:55.190 { 00:17:55.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.190 "dma_device_type": 2 00:17:55.190 }, 00:17:55.190 { 00:17:55.190 "dma_device_id": "system", 00:17:55.190 "dma_device_type": 1 00:17:55.190 }, 00:17:55.190 { 00:17:55.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.190 "dma_device_type": 2 00:17:55.190 }, 00:17:55.190 { 00:17:55.190 "dma_device_id": "system", 00:17:55.190 "dma_device_type": 1 00:17:55.190 }, 00:17:55.190 { 00:17:55.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.190 
"dma_device_type": 2 00:17:55.190 }, 00:17:55.190 { 00:17:55.190 "dma_device_id": "system", 00:17:55.190 "dma_device_type": 1 00:17:55.190 }, 00:17:55.190 { 00:17:55.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.190 "dma_device_type": 2 00:17:55.190 } 00:17:55.190 ], 00:17:55.190 "driver_specific": { 00:17:55.190 "raid": { 00:17:55.190 "uuid": "2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a", 00:17:55.190 "strip_size_kb": 64, 00:17:55.190 "state": "online", 00:17:55.190 "raid_level": "concat", 00:17:55.190 "superblock": true, 00:17:55.190 "num_base_bdevs": 4, 00:17:55.190 "num_base_bdevs_discovered": 4, 00:17:55.190 "num_base_bdevs_operational": 4, 00:17:55.190 "base_bdevs_list": [ 00:17:55.190 { 00:17:55.190 "name": "pt1", 00:17:55.190 "uuid": "5e8aa6d9-2c10-53a7-95e5-7c0ab5d7f406", 00:17:55.190 "is_configured": true, 00:17:55.190 "data_offset": 2048, 00:17:55.190 "data_size": 63488 00:17:55.190 }, 00:17:55.190 { 00:17:55.190 "name": "pt2", 00:17:55.190 "uuid": "979b7ad2-bf13-5718-89ae-019fdabe59c1", 00:17:55.190 "is_configured": true, 00:17:55.190 "data_offset": 2048, 00:17:55.190 "data_size": 63488 00:17:55.190 }, 00:17:55.190 { 00:17:55.190 "name": "pt3", 00:17:55.190 "uuid": "e52e9022-914d-518c-b875-4791dce08f73", 00:17:55.190 "is_configured": true, 00:17:55.190 "data_offset": 2048, 00:17:55.190 "data_size": 63488 00:17:55.190 }, 00:17:55.190 { 00:17:55.190 "name": "pt4", 00:17:55.190 "uuid": "cda235c9-7e76-53e2-91d4-eec39acd316c", 00:17:55.190 "is_configured": true, 00:17:55.190 "data_offset": 2048, 00:17:55.190 "data_size": 63488 00:17:55.190 } 00:17:55.190 ] 00:17:55.190 } 00:17:55.190 } 00:17:55.190 }' 00:17:55.190 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.190 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:17:55.190 pt2 00:17:55.190 pt3 00:17:55.190 pt4' 00:17:55.190 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:55.190 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:55.190 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:55.448 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:55.448 "name": "pt1", 00:17:55.448 "aliases": [ 00:17:55.448 "5e8aa6d9-2c10-53a7-95e5-7c0ab5d7f406" 00:17:55.448 ], 00:17:55.448 "product_name": "passthru", 00:17:55.448 "block_size": 512, 00:17:55.448 "num_blocks": 65536, 00:17:55.448 "uuid": "5e8aa6d9-2c10-53a7-95e5-7c0ab5d7f406", 00:17:55.448 "assigned_rate_limits": { 00:17:55.448 "rw_ios_per_sec": 0, 00:17:55.448 "rw_mbytes_per_sec": 0, 00:17:55.448 "r_mbytes_per_sec": 0, 00:17:55.448 "w_mbytes_per_sec": 0 00:17:55.448 }, 00:17:55.448 "claimed": true, 00:17:55.448 "claim_type": "exclusive_write", 00:17:55.448 "zoned": false, 00:17:55.448 "supported_io_types": { 00:17:55.448 "read": true, 00:17:55.448 "write": true, 00:17:55.448 "unmap": true, 00:17:55.448 "write_zeroes": true, 00:17:55.448 "flush": true, 00:17:55.448 "reset": true, 00:17:55.448 "compare": false, 00:17:55.448 "compare_and_write": false, 00:17:55.448 "abort": true, 00:17:55.448 "nvme_admin": false, 00:17:55.448 "nvme_io": false 00:17:55.448 }, 00:17:55.448 "memory_domains": [ 00:17:55.448 { 00:17:55.448 "dma_device_id": "system", 00:17:55.448 
"dma_device_type": 1 00:17:55.448 }, 00:17:55.448 { 00:17:55.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.448 "dma_device_type": 2 00:17:55.448 } 00:17:55.448 ], 00:17:55.448 "driver_specific": { 00:17:55.448 "passthru": { 00:17:55.448 "name": "pt1", 00:17:55.448 "base_bdev_name": "malloc1" 00:17:55.448 } 00:17:55.448 } 00:17:55.448 }' 00:17:55.448 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:55.706 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:55.706 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:55.706 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:55.706 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:55.706 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:55.706 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:55.706 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:55.706 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:55.706 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:55.964 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:55.964 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:55.964 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:55.964 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:55.964 03:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:56.221 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:56.221 "name": "pt2", 00:17:56.221 "aliases": [ 00:17:56.221 "979b7ad2-bf13-5718-89ae-019fdabe59c1" 00:17:56.221 ], 00:17:56.221 "product_name": "passthru", 00:17:56.221 "block_size": 512, 00:17:56.221 "num_blocks": 65536, 00:17:56.221 "uuid": "979b7ad2-bf13-5718-89ae-019fdabe59c1", 00:17:56.221 "assigned_rate_limits": { 00:17:56.221 "rw_ios_per_sec": 0, 00:17:56.221 "rw_mbytes_per_sec": 0, 00:17:56.221 "r_mbytes_per_sec": 0, 00:17:56.221 "w_mbytes_per_sec": 0 00:17:56.221 }, 00:17:56.221 "claimed": true, 00:17:56.221 "claim_type": "exclusive_write", 00:17:56.221 "zoned": false, 00:17:56.221 "supported_io_types": { 00:17:56.221 "read": true, 00:17:56.221 "write": true, 00:17:56.221 "unmap": true, 00:17:56.221 "write_zeroes": true, 00:17:56.221 "flush": true, 00:17:56.221 "reset": true, 00:17:56.221 "compare": false, 00:17:56.221 "compare_and_write": false, 00:17:56.221 "abort": true, 00:17:56.221 "nvme_admin": false, 00:17:56.221 "nvme_io": false 00:17:56.221 }, 00:17:56.221 "memory_domains": [ 00:17:56.221 { 00:17:56.221 "dma_device_id": "system", 00:17:56.221 "dma_device_type": 1 00:17:56.221 }, 00:17:56.221 { 00:17:56.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.221 "dma_device_type": 2 00:17:56.221 } 00:17:56.221 ], 00:17:56.221 "driver_specific": { 00:17:56.221 "passthru": { 00:17:56.221 "name": "pt2", 00:17:56.221 "base_bdev_name": "malloc2" 00:17:56.221 } 00:17:56.221 } 00:17:56.221 }' 00:17:56.221 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq 
.block_size 00:17:56.221 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:56.221 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:56.221 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:56.221 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:56.479 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:56.479 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:56.479 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:56.479 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:56.479 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:56.479 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:56.479 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:56.479 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:56.479 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:56.479 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:56.736 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:56.736 "name": "pt3", 00:17:56.736 "aliases": [ 00:17:56.736 "e52e9022-914d-518c-b875-4791dce08f73" 00:17:56.736 ], 00:17:56.736 "product_name": "passthru", 00:17:56.736 "block_size": 512, 00:17:56.736 "num_blocks": 65536, 00:17:56.737 "uuid": "e52e9022-914d-518c-b875-4791dce08f73", 00:17:56.737 "assigned_rate_limits": { 00:17:56.737 "rw_ios_per_sec": 0, 00:17:56.737 "rw_mbytes_per_sec": 0, 00:17:56.737 "r_mbytes_per_sec": 0, 00:17:56.737 "w_mbytes_per_sec": 0 00:17:56.737 }, 00:17:56.737 "claimed": true, 00:17:56.737 "claim_type": "exclusive_write", 00:17:56.737 "zoned": false, 00:17:56.737 "supported_io_types": { 00:17:56.737 "read": true, 00:17:56.737 "write": true, 00:17:56.737 "unmap": true, 00:17:56.737 "write_zeroes": true, 00:17:56.737 "flush": true, 00:17:56.737 "reset": true, 00:17:56.737 "compare": false, 00:17:56.737 "compare_and_write": false, 00:17:56.737 "abort": true, 00:17:56.737 "nvme_admin": false, 00:17:56.737 "nvme_io": false 00:17:56.737 }, 00:17:56.737 "memory_domains": [ 00:17:56.737 { 00:17:56.737 "dma_device_id": "system", 00:17:56.737 "dma_device_type": 1 00:17:56.737 }, 00:17:56.737 { 00:17:56.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.737 "dma_device_type": 2 00:17:56.737 } 00:17:56.737 ], 00:17:56.737 "driver_specific": { 00:17:56.737 "passthru": { 00:17:56.737 "name": "pt3", 00:17:56.737 "base_bdev_name": "malloc3" 00:17:56.737 } 00:17:56.737 } 00:17:56.737 }' 00:17:56.737 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:56.737 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:56.994 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:56.994 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:56.994 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:56.994 03:12:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:56.994 03:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:56.994 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:56.994 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:56.994 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:56.994 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:57.252 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:57.252 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:57.252 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:57.252 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:57.520 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:57.520 "name": "pt4", 00:17:57.520 "aliases": [ 00:17:57.520 "cda235c9-7e76-53e2-91d4-eec39acd316c" 00:17:57.520 ], 00:17:57.520 "product_name": "passthru", 00:17:57.520 "block_size": 512, 00:17:57.520 "num_blocks": 65536, 00:17:57.520 "uuid": "cda235c9-7e76-53e2-91d4-eec39acd316c", 00:17:57.520 "assigned_rate_limits": { 00:17:57.520 "rw_ios_per_sec": 0, 00:17:57.520 "rw_mbytes_per_sec": 0, 00:17:57.520 "r_mbytes_per_sec": 0, 00:17:57.520 "w_mbytes_per_sec": 0 00:17:57.520 }, 00:17:57.520 "claimed": true, 00:17:57.520 "claim_type": "exclusive_write", 00:17:57.520 "zoned": false, 00:17:57.520 "supported_io_types": { 00:17:57.520 "read": true, 00:17:57.520 "write": true, 00:17:57.520 "unmap": true, 00:17:57.520 "write_zeroes": true, 00:17:57.520 "flush": true, 00:17:57.520 "reset": true, 00:17:57.520 "compare": false, 00:17:57.520 "compare_and_write": false, 00:17:57.520 "abort": true, 00:17:57.520 "nvme_admin": false, 00:17:57.520 "nvme_io": false 00:17:57.520 }, 00:17:57.520 "memory_domains": [ 00:17:57.520 { 00:17:57.520 "dma_device_id": "system", 00:17:57.520 "dma_device_type": 1 00:17:57.520 }, 00:17:57.520 { 00:17:57.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.520 "dma_device_type": 2 00:17:57.520 } 00:17:57.520 ], 00:17:57.520 "driver_specific": { 00:17:57.520 "passthru": { 00:17:57.520 "name": "pt4", 00:17:57.520 "base_bdev_name": "malloc4" 00:17:57.520 } 00:17:57.520 } 00:17:57.520 }' 00:17:57.520 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:57.520 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:57.520 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:57.520 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:57.520 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:57.520 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:57.520 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:57.823 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:57.823 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:57.823 03:12:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:57.823 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:57.823 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:57.823 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:57.823 03:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:58.101 [2024-05-15 03:12:29.059143] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.101 03:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a 00:17:58.101 03:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a ']' 00:17:58.101 03:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:58.358 [2024-05-15 03:12:29.319527] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.358 [2024-05-15 03:12:29.319543] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.358 [2024-05-15 03:12:29.319591] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.358 [2024-05-15 03:12:29.319659] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.358 [2024-05-15 03:12:29.319668] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1726900 name raid_bdev1, state offline 00:17:58.358 03:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.358 03:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:58.615 03:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:58.615 03:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:58.615 03:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.615 03:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:58.615 03:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.615 03:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:58.873 03:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.873 03:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:59.130 03:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:59.130 03:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:59.388 
03:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:59.388 03:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:17:59.646 03:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:59.904 [2024-05-15 03:12:31.020019] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:59.904 [2024-05-15 03:12:31.021449] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:59.904 [2024-05-15 03:12:31.021495] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:59.904 [2024-05-15 03:12:31.021530] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:59.904 [2024-05-15 03:12:31.021574] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:59.904 [2024-05-15 03:12:31.021611] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:59.904 [2024-05-15 03:12:31.021632] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:59.904 [2024-05-15 03:12:31.021650] 
bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:59.904 [2024-05-15 03:12:31.021673] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.904 [2024-05-15 03:12:31.021681] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x172dc30 name raid_bdev1, state configuring 00:17:59.904 request: 00:17:59.904 { 00:17:59.904 "name": "raid_bdev1", 00:17:59.904 "raid_level": "concat", 00:17:59.904 "base_bdevs": [ 00:17:59.904 "malloc1", 00:17:59.904 "malloc2", 00:17:59.904 "malloc3", 00:17:59.904 "malloc4" 00:17:59.904 ], 00:17:59.904 "superblock": false, 00:17:59.904 "strip_size_kb": 64, 00:17:59.904 "method": "bdev_raid_create", 00:17:59.904 "req_id": 1 00:17:59.904 } 00:17:59.904 Got JSON-RPC error response 00:17:59.904 response: 00:17:59.904 { 00:17:59.904 "code": -17, 00:17:59.904 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:59.904 } 00:17:59.904 03:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:59.904 03:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:59.904 03:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:59.904 03:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:59.904 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.904 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:00.162 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:00.162 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:00.162 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.419 [2024-05-15 03:12:31.525284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.419 [2024-05-15 03:12:31.525320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.419 [2024-05-15 03:12:31.525337] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1727040 00:18:00.419 [2024-05-15 03:12:31.525346] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.419 [2024-05-15 03:12:31.527014] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.419 [2024-05-15 03:12:31.527041] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.419 [2024-05-15 03:12:31.527102] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:00.419 [2024-05-15 03:12:31.527128] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.419 pt1 00:18:00.419 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:00.419 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:00.419 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:00.419 03:12:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:00.419 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:00.419 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:00.419 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.419 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.419 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.419 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.419 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.419 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.677 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.677 "name": "raid_bdev1", 00:18:00.677 "uuid": "2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a", 00:18:00.677 "strip_size_kb": 64, 00:18:00.677 "state": "configuring", 00:18:00.677 "raid_level": "concat", 00:18:00.677 "superblock": true, 00:18:00.677 "num_base_bdevs": 4, 00:18:00.677 "num_base_bdevs_discovered": 1, 00:18:00.677 "num_base_bdevs_operational": 4, 00:18:00.677 "base_bdevs_list": [ 00:18:00.677 { 00:18:00.677 "name": "pt1", 00:18:00.677 "uuid": "5e8aa6d9-2c10-53a7-95e5-7c0ab5d7f406", 00:18:00.677 "is_configured": true, 00:18:00.677 "data_offset": 2048, 00:18:00.677 "data_size": 63488 00:18:00.677 }, 00:18:00.677 { 00:18:00.677 "name": null, 00:18:00.677 "uuid": "979b7ad2-bf13-5718-89ae-019fdabe59c1", 00:18:00.677 "is_configured": false, 00:18:00.677 "data_offset": 2048, 00:18:00.677 "data_size": 63488 00:18:00.677 }, 00:18:00.677 { 00:18:00.677 "name": null, 00:18:00.677 "uuid": "e52e9022-914d-518c-b875-4791dce08f73", 00:18:00.677 "is_configured": false, 00:18:00.677 "data_offset": 2048, 00:18:00.677 "data_size": 63488 00:18:00.677 }, 00:18:00.677 { 00:18:00.677 "name": null, 00:18:00.677 "uuid": "cda235c9-7e76-53e2-91d4-eec39acd316c", 00:18:00.677 "is_configured": false, 00:18:00.677 "data_offset": 2048, 00:18:00.677 "data_size": 63488 00:18:00.677 } 00:18:00.677 ] 00:18:00.677 }' 00:18:00.677 03:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.677 03:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.242 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:01.242 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.501 [2024-05-15 03:12:32.519970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.501 [2024-05-15 03:12:32.520018] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.501 [2024-05-15 03:12:32.520035] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1726dd0 00:18:01.501 [2024-05-15 03:12:32.520045] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.501 [2024-05-15 03:12:32.520397] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:18:01.501 [2024-05-15 03:12:32.520414] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.501 [2024-05-15 03:12:32.520472] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:01.501 [2024-05-15 03:12:32.520491] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.501 pt2 00:18:01.501 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:01.759 [2024-05-15 03:12:32.760628] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:01.759 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:01.759 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:01.759 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:01.759 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:01.759 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:01.759 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:01.759 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.759 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.759 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.759 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.759 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.759 03:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.017 03:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:02.017 "name": "raid_bdev1", 00:18:02.017 "uuid": "2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a", 00:18:02.017 "strip_size_kb": 64, 00:18:02.017 "state": "configuring", 00:18:02.017 "raid_level": "concat", 00:18:02.017 "superblock": true, 00:18:02.017 "num_base_bdevs": 4, 00:18:02.017 "num_base_bdevs_discovered": 1, 00:18:02.017 "num_base_bdevs_operational": 4, 00:18:02.017 "base_bdevs_list": [ 00:18:02.017 { 00:18:02.017 "name": "pt1", 00:18:02.017 "uuid": "5e8aa6d9-2c10-53a7-95e5-7c0ab5d7f406", 00:18:02.017 "is_configured": true, 00:18:02.017 "data_offset": 2048, 00:18:02.017 "data_size": 63488 00:18:02.017 }, 00:18:02.017 { 00:18:02.017 "name": null, 00:18:02.017 "uuid": "979b7ad2-bf13-5718-89ae-019fdabe59c1", 00:18:02.017 "is_configured": false, 00:18:02.017 "data_offset": 2048, 00:18:02.017 "data_size": 63488 00:18:02.017 }, 00:18:02.017 { 00:18:02.017 "name": null, 00:18:02.017 "uuid": "e52e9022-914d-518c-b875-4791dce08f73", 00:18:02.017 "is_configured": false, 00:18:02.017 "data_offset": 2048, 00:18:02.017 "data_size": 63488 00:18:02.017 }, 00:18:02.017 { 00:18:02.017 "name": null, 00:18:02.017 "uuid": "cda235c9-7e76-53e2-91d4-eec39acd316c", 00:18:02.018 "is_configured": false, 00:18:02.018 "data_offset": 2048, 00:18:02.018 "data_size": 63488 00:18:02.018 } 00:18:02.018 ] 
00:18:02.018 }' 00:18:02.018 03:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:02.018 03:12:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.583 03:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:02.583 03:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:02.583 03:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.840 [2024-05-15 03:12:33.855564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.840 [2024-05-15 03:12:33.855616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.840 [2024-05-15 03:12:33.855634] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18def80 00:18:02.840 [2024-05-15 03:12:33.855644] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.840 [2024-05-15 03:12:33.856001] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.840 [2024-05-15 03:12:33.856018] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.840 [2024-05-15 03:12:33.856078] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:02.840 [2024-05-15 03:12:33.856096] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.840 pt2 00:18:02.840 03:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:02.840 03:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:02.840 03:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:03.098 [2024-05-15 03:12:34.096201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:03.098 [2024-05-15 03:12:34.096241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.098 [2024-05-15 03:12:34.096257] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18d67d0 00:18:03.098 [2024-05-15 03:12:34.096266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.098 [2024-05-15 03:12:34.096594] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.098 [2024-05-15 03:12:34.096610] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:03.098 [2024-05-15 03:12:34.096668] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:03.098 [2024-05-15 03:12:34.096685] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:03.098 pt3 00:18:03.098 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:03.098 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.098 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 
00:18:03.356 [2024-05-15 03:12:34.344871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:03.356 [2024-05-15 03:12:34.344910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.356 [2024-05-15 03:12:34.344926] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17264a0 00:18:03.356 [2024-05-15 03:12:34.344936] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.356 [2024-05-15 03:12:34.345269] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.356 [2024-05-15 03:12:34.345286] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:03.356 [2024-05-15 03:12:34.345346] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:03.356 [2024-05-15 03:12:34.345365] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:03.356 [2024-05-15 03:12:34.345489] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x17274e0 00:18:03.356 [2024-05-15 03:12:34.345498] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:03.356 [2024-05-15 03:12:34.345679] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18d6c00 00:18:03.356 [2024-05-15 03:12:34.345817] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17274e0 00:18:03.356 [2024-05-15 03:12:34.345825] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x17274e0 00:18:03.356 [2024-05-15 03:12:34.345934] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.356 pt4 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.356 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.614 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.614 "name": "raid_bdev1", 00:18:03.614 "uuid": 
"2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a", 00:18:03.614 "strip_size_kb": 64, 00:18:03.614 "state": "online", 00:18:03.614 "raid_level": "concat", 00:18:03.614 "superblock": true, 00:18:03.614 "num_base_bdevs": 4, 00:18:03.614 "num_base_bdevs_discovered": 4, 00:18:03.614 "num_base_bdevs_operational": 4, 00:18:03.614 "base_bdevs_list": [ 00:18:03.614 { 00:18:03.614 "name": "pt1", 00:18:03.614 "uuid": "5e8aa6d9-2c10-53a7-95e5-7c0ab5d7f406", 00:18:03.614 "is_configured": true, 00:18:03.614 "data_offset": 2048, 00:18:03.614 "data_size": 63488 00:18:03.614 }, 00:18:03.614 { 00:18:03.614 "name": "pt2", 00:18:03.614 "uuid": "979b7ad2-bf13-5718-89ae-019fdabe59c1", 00:18:03.614 "is_configured": true, 00:18:03.614 "data_offset": 2048, 00:18:03.614 "data_size": 63488 00:18:03.614 }, 00:18:03.614 { 00:18:03.614 "name": "pt3", 00:18:03.614 "uuid": "e52e9022-914d-518c-b875-4791dce08f73", 00:18:03.614 "is_configured": true, 00:18:03.614 "data_offset": 2048, 00:18:03.614 "data_size": 63488 00:18:03.614 }, 00:18:03.614 { 00:18:03.614 "name": "pt4", 00:18:03.614 "uuid": "cda235c9-7e76-53e2-91d4-eec39acd316c", 00:18:03.614 "is_configured": true, 00:18:03.614 "data_offset": 2048, 00:18:03.614 "data_size": 63488 00:18:03.614 } 00:18:03.614 ] 00:18:03.614 }' 00:18:03.614 03:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.614 03:12:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.180 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:04.180 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:18:04.180 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:04.180 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:04.180 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:04.180 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:18:04.180 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:04.180 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:04.180 [2024-05-15 03:12:35.279676] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.180 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:04.180 "name": "raid_bdev1", 00:18:04.180 "aliases": [ 00:18:04.180 "2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a" 00:18:04.180 ], 00:18:04.180 "product_name": "Raid Volume", 00:18:04.180 "block_size": 512, 00:18:04.180 "num_blocks": 253952, 00:18:04.180 "uuid": "2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a", 00:18:04.180 "assigned_rate_limits": { 00:18:04.180 "rw_ios_per_sec": 0, 00:18:04.180 "rw_mbytes_per_sec": 0, 00:18:04.180 "r_mbytes_per_sec": 0, 00:18:04.180 "w_mbytes_per_sec": 0 00:18:04.180 }, 00:18:04.180 "claimed": false, 00:18:04.180 "zoned": false, 00:18:04.180 "supported_io_types": { 00:18:04.180 "read": true, 00:18:04.180 "write": true, 00:18:04.180 "unmap": true, 00:18:04.180 "write_zeroes": true, 00:18:04.180 "flush": true, 00:18:04.180 "reset": true, 00:18:04.180 "compare": false, 00:18:04.180 "compare_and_write": false, 00:18:04.180 "abort": false, 00:18:04.180 "nvme_admin": false, 00:18:04.180 "nvme_io": false 00:18:04.180 
}, 00:18:04.180 "memory_domains": [ 00:18:04.180 { 00:18:04.180 "dma_device_id": "system", 00:18:04.180 "dma_device_type": 1 00:18:04.180 }, 00:18:04.180 { 00:18:04.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.180 "dma_device_type": 2 00:18:04.180 }, 00:18:04.180 { 00:18:04.180 "dma_device_id": "system", 00:18:04.180 "dma_device_type": 1 00:18:04.180 }, 00:18:04.180 { 00:18:04.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.181 "dma_device_type": 2 00:18:04.181 }, 00:18:04.181 { 00:18:04.181 "dma_device_id": "system", 00:18:04.181 "dma_device_type": 1 00:18:04.181 }, 00:18:04.181 { 00:18:04.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.181 "dma_device_type": 2 00:18:04.181 }, 00:18:04.181 { 00:18:04.181 "dma_device_id": "system", 00:18:04.181 "dma_device_type": 1 00:18:04.181 }, 00:18:04.181 { 00:18:04.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.181 "dma_device_type": 2 00:18:04.181 } 00:18:04.181 ], 00:18:04.181 "driver_specific": { 00:18:04.181 "raid": { 00:18:04.181 "uuid": "2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a", 00:18:04.181 "strip_size_kb": 64, 00:18:04.181 "state": "online", 00:18:04.181 "raid_level": "concat", 00:18:04.181 "superblock": true, 00:18:04.181 "num_base_bdevs": 4, 00:18:04.181 "num_base_bdevs_discovered": 4, 00:18:04.181 "num_base_bdevs_operational": 4, 00:18:04.181 "base_bdevs_list": [ 00:18:04.181 { 00:18:04.181 "name": "pt1", 00:18:04.181 "uuid": "5e8aa6d9-2c10-53a7-95e5-7c0ab5d7f406", 00:18:04.181 "is_configured": true, 00:18:04.181 "data_offset": 2048, 00:18:04.181 "data_size": 63488 00:18:04.181 }, 00:18:04.181 { 00:18:04.181 "name": "pt2", 00:18:04.181 "uuid": "979b7ad2-bf13-5718-89ae-019fdabe59c1", 00:18:04.181 "is_configured": true, 00:18:04.181 "data_offset": 2048, 00:18:04.181 "data_size": 63488 00:18:04.181 }, 00:18:04.181 { 00:18:04.181 "name": "pt3", 00:18:04.181 "uuid": "e52e9022-914d-518c-b875-4791dce08f73", 00:18:04.181 "is_configured": true, 00:18:04.181 "data_offset": 2048, 00:18:04.181 "data_size": 63488 00:18:04.181 }, 00:18:04.181 { 00:18:04.181 "name": "pt4", 00:18:04.181 "uuid": "cda235c9-7e76-53e2-91d4-eec39acd316c", 00:18:04.181 "is_configured": true, 00:18:04.181 "data_offset": 2048, 00:18:04.181 "data_size": 63488 00:18:04.181 } 00:18:04.181 ] 00:18:04.181 } 00:18:04.181 } 00:18:04.181 }' 00:18:04.181 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:04.439 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:18:04.439 pt2 00:18:04.439 pt3 00:18:04.439 pt4' 00:18:04.439 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:04.439 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:04.439 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:04.697 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:04.697 "name": "pt1", 00:18:04.697 "aliases": [ 00:18:04.697 "5e8aa6d9-2c10-53a7-95e5-7c0ab5d7f406" 00:18:04.697 ], 00:18:04.697 "product_name": "passthru", 00:18:04.697 "block_size": 512, 00:18:04.697 "num_blocks": 65536, 00:18:04.697 "uuid": "5e8aa6d9-2c10-53a7-95e5-7c0ab5d7f406", 00:18:04.697 "assigned_rate_limits": { 00:18:04.697 "rw_ios_per_sec": 0, 00:18:04.697 "rw_mbytes_per_sec": 0, 00:18:04.697 
"r_mbytes_per_sec": 0, 00:18:04.697 "w_mbytes_per_sec": 0 00:18:04.697 }, 00:18:04.697 "claimed": true, 00:18:04.697 "claim_type": "exclusive_write", 00:18:04.697 "zoned": false, 00:18:04.697 "supported_io_types": { 00:18:04.697 "read": true, 00:18:04.697 "write": true, 00:18:04.697 "unmap": true, 00:18:04.697 "write_zeroes": true, 00:18:04.697 "flush": true, 00:18:04.697 "reset": true, 00:18:04.697 "compare": false, 00:18:04.697 "compare_and_write": false, 00:18:04.697 "abort": true, 00:18:04.697 "nvme_admin": false, 00:18:04.697 "nvme_io": false 00:18:04.697 }, 00:18:04.697 "memory_domains": [ 00:18:04.697 { 00:18:04.697 "dma_device_id": "system", 00:18:04.697 "dma_device_type": 1 00:18:04.697 }, 00:18:04.697 { 00:18:04.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.697 "dma_device_type": 2 00:18:04.697 } 00:18:04.697 ], 00:18:04.697 "driver_specific": { 00:18:04.697 "passthru": { 00:18:04.697 "name": "pt1", 00:18:04.697 "base_bdev_name": "malloc1" 00:18:04.697 } 00:18:04.697 } 00:18:04.697 }' 00:18:04.697 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:04.697 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:04.697 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:04.697 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:04.697 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:04.697 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:04.697 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:04.697 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:04.955 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:04.955 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:04.955 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:04.955 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:04.955 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:04.955 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:04.955 03:12:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:05.213 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:05.213 "name": "pt2", 00:18:05.213 "aliases": [ 00:18:05.213 "979b7ad2-bf13-5718-89ae-019fdabe59c1" 00:18:05.213 ], 00:18:05.213 "product_name": "passthru", 00:18:05.213 "block_size": 512, 00:18:05.213 "num_blocks": 65536, 00:18:05.213 "uuid": "979b7ad2-bf13-5718-89ae-019fdabe59c1", 00:18:05.213 "assigned_rate_limits": { 00:18:05.213 "rw_ios_per_sec": 0, 00:18:05.213 "rw_mbytes_per_sec": 0, 00:18:05.213 "r_mbytes_per_sec": 0, 00:18:05.213 "w_mbytes_per_sec": 0 00:18:05.213 }, 00:18:05.213 "claimed": true, 00:18:05.213 "claim_type": "exclusive_write", 00:18:05.213 "zoned": false, 00:18:05.213 "supported_io_types": { 00:18:05.213 "read": true, 00:18:05.213 "write": true, 00:18:05.213 "unmap": true, 00:18:05.213 "write_zeroes": true, 00:18:05.213 "flush": true, 00:18:05.213 "reset": true, 00:18:05.213 "compare": false, 00:18:05.213 
"compare_and_write": false, 00:18:05.213 "abort": true, 00:18:05.213 "nvme_admin": false, 00:18:05.213 "nvme_io": false 00:18:05.213 }, 00:18:05.213 "memory_domains": [ 00:18:05.213 { 00:18:05.213 "dma_device_id": "system", 00:18:05.213 "dma_device_type": 1 00:18:05.213 }, 00:18:05.213 { 00:18:05.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.213 "dma_device_type": 2 00:18:05.213 } 00:18:05.213 ], 00:18:05.213 "driver_specific": { 00:18:05.213 "passthru": { 00:18:05.213 "name": "pt2", 00:18:05.213 "base_bdev_name": "malloc2" 00:18:05.213 } 00:18:05.213 } 00:18:05.213 }' 00:18:05.213 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:05.213 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:05.213 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:05.213 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:05.470 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:05.470 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:05.470 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:05.470 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:05.470 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:05.470 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:05.470 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:05.470 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:05.470 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:05.470 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:05.470 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:05.727 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:05.727 "name": "pt3", 00:18:05.727 "aliases": [ 00:18:05.727 "e52e9022-914d-518c-b875-4791dce08f73" 00:18:05.727 ], 00:18:05.727 "product_name": "passthru", 00:18:05.727 "block_size": 512, 00:18:05.727 "num_blocks": 65536, 00:18:05.727 "uuid": "e52e9022-914d-518c-b875-4791dce08f73", 00:18:05.727 "assigned_rate_limits": { 00:18:05.727 "rw_ios_per_sec": 0, 00:18:05.727 "rw_mbytes_per_sec": 0, 00:18:05.727 "r_mbytes_per_sec": 0, 00:18:05.727 "w_mbytes_per_sec": 0 00:18:05.727 }, 00:18:05.727 "claimed": true, 00:18:05.727 "claim_type": "exclusive_write", 00:18:05.727 "zoned": false, 00:18:05.727 "supported_io_types": { 00:18:05.727 "read": true, 00:18:05.727 "write": true, 00:18:05.727 "unmap": true, 00:18:05.727 "write_zeroes": true, 00:18:05.727 "flush": true, 00:18:05.727 "reset": true, 00:18:05.727 "compare": false, 00:18:05.727 "compare_and_write": false, 00:18:05.727 "abort": true, 00:18:05.727 "nvme_admin": false, 00:18:05.727 "nvme_io": false 00:18:05.727 }, 00:18:05.727 "memory_domains": [ 00:18:05.727 { 00:18:05.727 "dma_device_id": "system", 00:18:05.727 "dma_device_type": 1 00:18:05.727 }, 00:18:05.727 { 00:18:05.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.727 "dma_device_type": 2 00:18:05.727 } 00:18:05.727 ], 00:18:05.727 
"driver_specific": { 00:18:05.727 "passthru": { 00:18:05.727 "name": "pt3", 00:18:05.727 "base_bdev_name": "malloc3" 00:18:05.727 } 00:18:05.727 } 00:18:05.727 }' 00:18:05.727 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:05.985 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:05.985 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:05.985 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:05.985 03:12:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:05.985 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:05.985 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:05.985 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:05.985 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:05.985 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:06.243 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:06.243 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:06.243 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:06.243 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:06.243 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:06.500 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:06.500 "name": "pt4", 00:18:06.500 "aliases": [ 00:18:06.500 "cda235c9-7e76-53e2-91d4-eec39acd316c" 00:18:06.500 ], 00:18:06.500 "product_name": "passthru", 00:18:06.500 "block_size": 512, 00:18:06.500 "num_blocks": 65536, 00:18:06.500 "uuid": "cda235c9-7e76-53e2-91d4-eec39acd316c", 00:18:06.500 "assigned_rate_limits": { 00:18:06.500 "rw_ios_per_sec": 0, 00:18:06.500 "rw_mbytes_per_sec": 0, 00:18:06.500 "r_mbytes_per_sec": 0, 00:18:06.500 "w_mbytes_per_sec": 0 00:18:06.500 }, 00:18:06.500 "claimed": true, 00:18:06.500 "claim_type": "exclusive_write", 00:18:06.500 "zoned": false, 00:18:06.500 "supported_io_types": { 00:18:06.500 "read": true, 00:18:06.500 "write": true, 00:18:06.500 "unmap": true, 00:18:06.500 "write_zeroes": true, 00:18:06.500 "flush": true, 00:18:06.500 "reset": true, 00:18:06.500 "compare": false, 00:18:06.500 "compare_and_write": false, 00:18:06.500 "abort": true, 00:18:06.500 "nvme_admin": false, 00:18:06.500 "nvme_io": false 00:18:06.500 }, 00:18:06.500 "memory_domains": [ 00:18:06.500 { 00:18:06.500 "dma_device_id": "system", 00:18:06.500 "dma_device_type": 1 00:18:06.500 }, 00:18:06.500 { 00:18:06.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.500 "dma_device_type": 2 00:18:06.500 } 00:18:06.500 ], 00:18:06.500 "driver_specific": { 00:18:06.500 "passthru": { 00:18:06.500 "name": "pt4", 00:18:06.500 "base_bdev_name": "malloc4" 00:18:06.500 } 00:18:06.500 } 00:18:06.500 }' 00:18:06.500 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:06.500 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:06.500 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 
-- # [[ 512 == 512 ]] 00:18:06.500 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:06.500 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:06.500 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.500 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:06.758 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:06.758 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.758 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:06.758 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:06.758 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:06.758 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:06.758 03:12:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:07.015 [2024-05-15 03:12:38.079203] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a '!=' 2fdd65ee-576e-48e9-bcb7-f8d8f9ba4d8a ']' 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 4130571 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 4130571 ']' 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 4130571 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4130571 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4130571' 00:18:07.015 killing process with pid 4130571 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 4130571 00:18:07.015 [2024-05-15 03:12:38.146452] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.015 [2024-05-15 03:12:38.146520] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.015 [2024-05-15 03:12:38.146583] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.015 [2024-05-15 03:12:38.146593] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17274e0 name raid_bdev1, state offline 00:18:07.015 03:12:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@970 -- # wait 4130571 00:18:07.274 [2024-05-15 03:12:38.180718] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.274 03:12:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:18:07.274 00:18:07.274 real 0m16.527s 00:18:07.274 user 0m30.496s 00:18:07.274 sys 0m2.298s 00:18:07.274 03:12:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:07.274 03:12:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.274 ************************************ 00:18:07.274 END TEST raid_superblock_test 00:18:07.274 ************************************ 00:18:07.532 03:12:38 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:18:07.532 03:12:38 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:07.532 03:12:38 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:07.532 03:12:38 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:07.532 03:12:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.532 ************************************ 00:18:07.532 START TEST raid_state_function_test 00:18:07.532 ************************************ 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 false 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@225 -- # local base_bdevs 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:18:07.532 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:18:07.533 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:18:07.533 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:18:07.533 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=4133616 00:18:07.533 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4133616' 00:18:07.533 Process raid pid: 4133616 00:18:07.533 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:07.533 03:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 4133616 /var/tmp/spdk-raid.sock 00:18:07.533 03:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 4133616 ']' 00:18:07.533 03:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:07.533 03:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:07.533 03:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:07.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:07.533 03:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:07.533 03:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.533 [2024-05-15 03:12:38.544642] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:18:07.533 [2024-05-15 03:12:38.544693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.533 [2024-05-15 03:12:38.642987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.791 [2024-05-15 03:12:38.736812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.791 [2024-05-15 03:12:38.800624] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.791 [2024-05-15 03:12:38.800659] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.357 03:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:08.357 03:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:18:08.357 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:08.615 [2024-05-15 03:12:39.719448] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:08.615 [2024-05-15 03:12:39.719486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:08.615 [2024-05-15 03:12:39.719495] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:08.615 [2024-05-15 03:12:39.719504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:08.615 [2024-05-15 03:12:39.719511] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:08.615 [2024-05-15 03:12:39.719519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:08.615 [2024-05-15 03:12:39.719527] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:08.615 [2024-05-15 03:12:39.719534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:08.615 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:08.615 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:08.615 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:08.615 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:08.615 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:08.615 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:08.615 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.615 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.615 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.615 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.615 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:08.615 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.873 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.873 "name": "Existed_Raid", 00:18:08.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.873 "strip_size_kb": 0, 00:18:08.873 "state": "configuring", 00:18:08.873 "raid_level": "raid1", 00:18:08.873 "superblock": false, 00:18:08.873 "num_base_bdevs": 4, 00:18:08.873 "num_base_bdevs_discovered": 0, 00:18:08.873 "num_base_bdevs_operational": 4, 00:18:08.873 "base_bdevs_list": [ 00:18:08.873 { 00:18:08.873 "name": "BaseBdev1", 00:18:08.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.873 "is_configured": false, 00:18:08.873 "data_offset": 0, 00:18:08.873 "data_size": 0 00:18:08.873 }, 00:18:08.873 { 00:18:08.873 "name": "BaseBdev2", 00:18:08.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.873 "is_configured": false, 00:18:08.873 "data_offset": 0, 00:18:08.873 "data_size": 0 00:18:08.873 }, 00:18:08.873 { 00:18:08.873 "name": "BaseBdev3", 00:18:08.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.873 "is_configured": false, 00:18:08.873 "data_offset": 0, 00:18:08.873 "data_size": 0 00:18:08.873 }, 00:18:08.873 { 00:18:08.873 "name": "BaseBdev4", 00:18:08.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.873 "is_configured": false, 00:18:08.873 "data_offset": 0, 00:18:08.873 "data_size": 0 00:18:08.873 } 00:18:08.873 ] 00:18:08.873 }' 00:18:08.873 03:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.873 03:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.806 03:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:09.806 [2024-05-15 03:12:40.842338] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:09.806 [2024-05-15 03:12:40.842369] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21d6e00 name Existed_Raid, state configuring 00:18:09.806 03:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:10.063 [2024-05-15 03:12:41.091009] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:10.063 [2024-05-15 03:12:41.091041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:10.063 [2024-05-15 03:12:41.091050] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:10.063 [2024-05-15 03:12:41.091059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:10.063 [2024-05-15 03:12:41.091066] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:10.063 [2024-05-15 03:12:41.091075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:10.063 [2024-05-15 03:12:41.091082] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:10.063 [2024-05-15 03:12:41.091090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 
doesn't exist now 00:18:10.063 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:10.320 [2024-05-15 03:12:41.353307] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.320 BaseBdev1 00:18:10.320 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:18:10.320 03:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:10.320 03:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:10.320 03:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:10.320 03:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:10.320 03:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:10.320 03:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.576 03:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:10.834 [ 00:18:10.834 { 00:18:10.834 "name": "BaseBdev1", 00:18:10.834 "aliases": [ 00:18:10.834 "a7cd8f76-e670-4061-a6e3-3a0ec6f4d8f6" 00:18:10.834 ], 00:18:10.834 "product_name": "Malloc disk", 00:18:10.834 "block_size": 512, 00:18:10.834 "num_blocks": 65536, 00:18:10.834 "uuid": "a7cd8f76-e670-4061-a6e3-3a0ec6f4d8f6", 00:18:10.834 "assigned_rate_limits": { 00:18:10.834 "rw_ios_per_sec": 0, 00:18:10.834 "rw_mbytes_per_sec": 0, 00:18:10.834 "r_mbytes_per_sec": 0, 00:18:10.834 "w_mbytes_per_sec": 0 00:18:10.834 }, 00:18:10.834 "claimed": true, 00:18:10.834 "claim_type": "exclusive_write", 00:18:10.834 "zoned": false, 00:18:10.834 "supported_io_types": { 00:18:10.834 "read": true, 00:18:10.834 "write": true, 00:18:10.834 "unmap": true, 00:18:10.834 "write_zeroes": true, 00:18:10.834 "flush": true, 00:18:10.834 "reset": true, 00:18:10.834 "compare": false, 00:18:10.834 "compare_and_write": false, 00:18:10.834 "abort": true, 00:18:10.834 "nvme_admin": false, 00:18:10.834 "nvme_io": false 00:18:10.834 }, 00:18:10.834 "memory_domains": [ 00:18:10.834 { 00:18:10.834 "dma_device_id": "system", 00:18:10.834 "dma_device_type": 1 00:18:10.834 }, 00:18:10.834 { 00:18:10.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.834 "dma_device_type": 2 00:18:10.834 } 00:18:10.834 ], 00:18:10.834 "driver_specific": {} 00:18:10.834 } 00:18:10.834 ] 00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.834 03:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.092 03:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.092 "name": "Existed_Raid", 00:18:11.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.092 "strip_size_kb": 0, 00:18:11.092 "state": "configuring", 00:18:11.092 "raid_level": "raid1", 00:18:11.092 "superblock": false, 00:18:11.092 "num_base_bdevs": 4, 00:18:11.092 "num_base_bdevs_discovered": 1, 00:18:11.092 "num_base_bdevs_operational": 4, 00:18:11.092 "base_bdevs_list": [ 00:18:11.092 { 00:18:11.092 "name": "BaseBdev1", 00:18:11.092 "uuid": "a7cd8f76-e670-4061-a6e3-3a0ec6f4d8f6", 00:18:11.092 "is_configured": true, 00:18:11.092 "data_offset": 0, 00:18:11.092 "data_size": 65536 00:18:11.092 }, 00:18:11.092 { 00:18:11.092 "name": "BaseBdev2", 00:18:11.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.092 "is_configured": false, 00:18:11.092 "data_offset": 0, 00:18:11.092 "data_size": 0 00:18:11.092 }, 00:18:11.092 { 00:18:11.092 "name": "BaseBdev3", 00:18:11.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.092 "is_configured": false, 00:18:11.092 "data_offset": 0, 00:18:11.092 "data_size": 0 00:18:11.092 }, 00:18:11.092 { 00:18:11.092 "name": "BaseBdev4", 00:18:11.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.092 "is_configured": false, 00:18:11.092 "data_offset": 0, 00:18:11.092 "data_size": 0 00:18:11.092 } 00:18:11.092 ] 00:18:11.092 }' 00:18:11.092 03:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.092 03:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.691 03:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:11.949 [2024-05-15 03:12:42.961629] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:11.949 [2024-05-15 03:12:42.961670] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21d70a0 name Existed_Raid, state configuring 00:18:11.949 03:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:12.207 [2024-05-15 03:12:43.218341] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.207 [2024-05-15 03:12:43.219862] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.207 [2024-05-15 03:12:43.219892] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.207 [2024-05-15 03:12:43.219901] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:12.207 [2024-05-15 03:12:43.219910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:12.207 [2024-05-15 03:12:43.219918] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:12.207 [2024-05-15 03:12:43.219926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.207 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.472 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.472 "name": "Existed_Raid", 00:18:12.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.472 "strip_size_kb": 0, 00:18:12.472 "state": "configuring", 00:18:12.472 "raid_level": "raid1", 00:18:12.472 "superblock": false, 00:18:12.472 "num_base_bdevs": 4, 00:18:12.472 "num_base_bdevs_discovered": 1, 00:18:12.472 "num_base_bdevs_operational": 4, 00:18:12.472 "base_bdevs_list": [ 00:18:12.472 { 00:18:12.472 "name": "BaseBdev1", 00:18:12.472 "uuid": "a7cd8f76-e670-4061-a6e3-3a0ec6f4d8f6", 00:18:12.472 "is_configured": true, 00:18:12.472 "data_offset": 0, 00:18:12.472 "data_size": 65536 00:18:12.472 }, 00:18:12.472 { 00:18:12.472 "name": "BaseBdev2", 00:18:12.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.472 "is_configured": false, 00:18:12.472 "data_offset": 0, 00:18:12.472 "data_size": 0 00:18:12.472 }, 00:18:12.472 { 00:18:12.472 "name": "BaseBdev3", 00:18:12.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.472 "is_configured": false, 00:18:12.472 "data_offset": 0, 00:18:12.472 "data_size": 0 00:18:12.472 }, 00:18:12.472 { 00:18:12.472 "name": "BaseBdev4", 00:18:12.472 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.472 "is_configured": false, 00:18:12.472 "data_offset": 0, 00:18:12.472 "data_size": 0 00:18:12.472 } 00:18:12.472 ] 00:18:12.472 }' 00:18:12.472 03:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.472 03:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.039 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:13.296 [2024-05-15 03:12:44.364603] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:13.296 BaseBdev2 00:18:13.296 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:18:13.296 03:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:13.296 03:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:13.296 03:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:13.296 03:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:13.296 03:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:13.296 03:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:13.553 03:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:13.811 [ 00:18:13.811 { 00:18:13.811 "name": "BaseBdev2", 00:18:13.811 "aliases": [ 00:18:13.811 "fb1a6792-a954-44a2-ac57-ccd6215ede2b" 00:18:13.811 ], 00:18:13.811 "product_name": "Malloc disk", 00:18:13.811 "block_size": 512, 00:18:13.811 "num_blocks": 65536, 00:18:13.811 "uuid": "fb1a6792-a954-44a2-ac57-ccd6215ede2b", 00:18:13.811 "assigned_rate_limits": { 00:18:13.811 "rw_ios_per_sec": 0, 00:18:13.811 "rw_mbytes_per_sec": 0, 00:18:13.811 "r_mbytes_per_sec": 0, 00:18:13.811 "w_mbytes_per_sec": 0 00:18:13.811 }, 00:18:13.811 "claimed": true, 00:18:13.811 "claim_type": "exclusive_write", 00:18:13.811 "zoned": false, 00:18:13.811 "supported_io_types": { 00:18:13.811 "read": true, 00:18:13.811 "write": true, 00:18:13.811 "unmap": true, 00:18:13.811 "write_zeroes": true, 00:18:13.811 "flush": true, 00:18:13.811 "reset": true, 00:18:13.811 "compare": false, 00:18:13.811 "compare_and_write": false, 00:18:13.811 "abort": true, 00:18:13.811 "nvme_admin": false, 00:18:13.811 "nvme_io": false 00:18:13.811 }, 00:18:13.811 "memory_domains": [ 00:18:13.811 { 00:18:13.811 "dma_device_id": "system", 00:18:13.811 "dma_device_type": 1 00:18:13.811 }, 00:18:13.811 { 00:18:13.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.811 "dma_device_type": 2 00:18:13.811 } 00:18:13.811 ], 00:18:13.811 "driver_specific": {} 00:18:13.811 } 00:18:13.811 ] 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:13.811 03:12:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.811 03:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.069 03:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.069 "name": "Existed_Raid", 00:18:14.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.069 "strip_size_kb": 0, 00:18:14.069 "state": "configuring", 00:18:14.069 "raid_level": "raid1", 00:18:14.069 "superblock": false, 00:18:14.069 "num_base_bdevs": 4, 00:18:14.069 "num_base_bdevs_discovered": 2, 00:18:14.069 "num_base_bdevs_operational": 4, 00:18:14.069 "base_bdevs_list": [ 00:18:14.069 { 00:18:14.069 "name": "BaseBdev1", 00:18:14.069 "uuid": "a7cd8f76-e670-4061-a6e3-3a0ec6f4d8f6", 00:18:14.069 "is_configured": true, 00:18:14.069 "data_offset": 0, 00:18:14.069 "data_size": 65536 00:18:14.069 }, 00:18:14.069 { 00:18:14.069 "name": "BaseBdev2", 00:18:14.069 "uuid": "fb1a6792-a954-44a2-ac57-ccd6215ede2b", 00:18:14.069 "is_configured": true, 00:18:14.069 "data_offset": 0, 00:18:14.069 "data_size": 65536 00:18:14.069 }, 00:18:14.069 { 00:18:14.069 "name": "BaseBdev3", 00:18:14.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.069 "is_configured": false, 00:18:14.069 "data_offset": 0, 00:18:14.069 "data_size": 0 00:18:14.069 }, 00:18:14.069 { 00:18:14.069 "name": "BaseBdev4", 00:18:14.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.069 "is_configured": false, 00:18:14.069 "data_offset": 0, 00:18:14.069 "data_size": 0 00:18:14.069 } 00:18:14.069 ] 00:18:14.069 }' 00:18:14.069 03:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.069 03:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.634 03:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:14.892 [2024-05-15 03:12:46.008280] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:14.892 BaseBdev3 00:18:14.892 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # 
waitforbdev BaseBdev3 00:18:14.892 03:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:14.892 03:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:14.892 03:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:14.892 03:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:14.892 03:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:14.892 03:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:15.150 03:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:15.407 [ 00:18:15.407 { 00:18:15.407 "name": "BaseBdev3", 00:18:15.407 "aliases": [ 00:18:15.407 "15e25c8d-cebb-4f9f-b4ab-6691dd092b17" 00:18:15.407 ], 00:18:15.407 "product_name": "Malloc disk", 00:18:15.407 "block_size": 512, 00:18:15.407 "num_blocks": 65536, 00:18:15.407 "uuid": "15e25c8d-cebb-4f9f-b4ab-6691dd092b17", 00:18:15.407 "assigned_rate_limits": { 00:18:15.407 "rw_ios_per_sec": 0, 00:18:15.407 "rw_mbytes_per_sec": 0, 00:18:15.407 "r_mbytes_per_sec": 0, 00:18:15.407 "w_mbytes_per_sec": 0 00:18:15.407 }, 00:18:15.407 "claimed": true, 00:18:15.407 "claim_type": "exclusive_write", 00:18:15.407 "zoned": false, 00:18:15.407 "supported_io_types": { 00:18:15.407 "read": true, 00:18:15.407 "write": true, 00:18:15.407 "unmap": true, 00:18:15.407 "write_zeroes": true, 00:18:15.407 "flush": true, 00:18:15.407 "reset": true, 00:18:15.407 "compare": false, 00:18:15.407 "compare_and_write": false, 00:18:15.407 "abort": true, 00:18:15.407 "nvme_admin": false, 00:18:15.407 "nvme_io": false 00:18:15.407 }, 00:18:15.407 "memory_domains": [ 00:18:15.407 { 00:18:15.407 "dma_device_id": "system", 00:18:15.407 "dma_device_type": 1 00:18:15.407 }, 00:18:15.407 { 00:18:15.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.407 "dma_device_type": 2 00:18:15.407 } 00:18:15.407 ], 00:18:15.407 "driver_specific": {} 00:18:15.407 } 00:18:15.407 ] 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.407 03:12:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.407 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.665 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.665 "name": "Existed_Raid", 00:18:15.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.665 "strip_size_kb": 0, 00:18:15.665 "state": "configuring", 00:18:15.665 "raid_level": "raid1", 00:18:15.665 "superblock": false, 00:18:15.665 "num_base_bdevs": 4, 00:18:15.665 "num_base_bdevs_discovered": 3, 00:18:15.665 "num_base_bdevs_operational": 4, 00:18:15.665 "base_bdevs_list": [ 00:18:15.665 { 00:18:15.665 "name": "BaseBdev1", 00:18:15.665 "uuid": "a7cd8f76-e670-4061-a6e3-3a0ec6f4d8f6", 00:18:15.665 "is_configured": true, 00:18:15.665 "data_offset": 0, 00:18:15.665 "data_size": 65536 00:18:15.665 }, 00:18:15.665 { 00:18:15.665 "name": "BaseBdev2", 00:18:15.665 "uuid": "fb1a6792-a954-44a2-ac57-ccd6215ede2b", 00:18:15.665 "is_configured": true, 00:18:15.665 "data_offset": 0, 00:18:15.665 "data_size": 65536 00:18:15.665 }, 00:18:15.665 { 00:18:15.665 "name": "BaseBdev3", 00:18:15.665 "uuid": "15e25c8d-cebb-4f9f-b4ab-6691dd092b17", 00:18:15.665 "is_configured": true, 00:18:15.665 "data_offset": 0, 00:18:15.665 "data_size": 65536 00:18:15.665 }, 00:18:15.665 { 00:18:15.665 "name": "BaseBdev4", 00:18:15.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.665 "is_configured": false, 00:18:15.665 "data_offset": 0, 00:18:15.665 "data_size": 0 00:18:15.665 } 00:18:15.665 ] 00:18:15.665 }' 00:18:15.665 03:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.665 03:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.598 03:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:16.598 [2024-05-15 03:12:47.668010] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:16.598 [2024-05-15 03:12:47.668052] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x21d6670 00:18:16.598 [2024-05-15 03:12:47.668058] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:16.598 [2024-05-15 03:12:47.668256] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21d8a30 00:18:16.598 [2024-05-15 03:12:47.668392] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21d6670 00:18:16.598 [2024-05-15 03:12:47.668400] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x21d6670 00:18:16.598 [2024-05-15 03:12:47.668564] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.598 BaseBdev4 00:18:16.598 03:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:18:16.598 03:12:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:18:16.598 03:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:16.598 03:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:16.598 03:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:16.598 03:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:16.598 03:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:16.855 03:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:17.113 [ 00:18:17.113 { 00:18:17.113 "name": "BaseBdev4", 00:18:17.113 "aliases": [ 00:18:17.113 "11149556-1143-48d0-b68c-5e0f3d1fba6a" 00:18:17.113 ], 00:18:17.113 "product_name": "Malloc disk", 00:18:17.113 "block_size": 512, 00:18:17.113 "num_blocks": 65536, 00:18:17.113 "uuid": "11149556-1143-48d0-b68c-5e0f3d1fba6a", 00:18:17.113 "assigned_rate_limits": { 00:18:17.113 "rw_ios_per_sec": 0, 00:18:17.113 "rw_mbytes_per_sec": 0, 00:18:17.113 "r_mbytes_per_sec": 0, 00:18:17.113 "w_mbytes_per_sec": 0 00:18:17.113 }, 00:18:17.113 "claimed": true, 00:18:17.113 "claim_type": "exclusive_write", 00:18:17.113 "zoned": false, 00:18:17.113 "supported_io_types": { 00:18:17.113 "read": true, 00:18:17.113 "write": true, 00:18:17.113 "unmap": true, 00:18:17.113 "write_zeroes": true, 00:18:17.113 "flush": true, 00:18:17.113 "reset": true, 00:18:17.113 "compare": false, 00:18:17.113 "compare_and_write": false, 00:18:17.113 "abort": true, 00:18:17.113 "nvme_admin": false, 00:18:17.113 "nvme_io": false 00:18:17.113 }, 00:18:17.113 "memory_domains": [ 00:18:17.113 { 00:18:17.113 "dma_device_id": "system", 00:18:17.113 "dma_device_type": 1 00:18:17.113 }, 00:18:17.113 { 00:18:17.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.113 "dma_device_type": 2 00:18:17.113 } 00:18:17.113 ], 00:18:17.113 "driver_specific": {} 00:18:17.113 } 00:18:17.113 ] 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.113 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.371 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.371 "name": "Existed_Raid", 00:18:17.371 "uuid": "cbc91da7-6e56-49e6-8649-66c566c70989", 00:18:17.371 "strip_size_kb": 0, 00:18:17.371 "state": "online", 00:18:17.371 "raid_level": "raid1", 00:18:17.371 "superblock": false, 00:18:17.371 "num_base_bdevs": 4, 00:18:17.371 "num_base_bdevs_discovered": 4, 00:18:17.371 "num_base_bdevs_operational": 4, 00:18:17.371 "base_bdevs_list": [ 00:18:17.371 { 00:18:17.371 "name": "BaseBdev1", 00:18:17.371 "uuid": "a7cd8f76-e670-4061-a6e3-3a0ec6f4d8f6", 00:18:17.371 "is_configured": true, 00:18:17.371 "data_offset": 0, 00:18:17.371 "data_size": 65536 00:18:17.371 }, 00:18:17.371 { 00:18:17.371 "name": "BaseBdev2", 00:18:17.371 "uuid": "fb1a6792-a954-44a2-ac57-ccd6215ede2b", 00:18:17.371 "is_configured": true, 00:18:17.371 "data_offset": 0, 00:18:17.371 "data_size": 65536 00:18:17.371 }, 00:18:17.371 { 00:18:17.371 "name": "BaseBdev3", 00:18:17.371 "uuid": "15e25c8d-cebb-4f9f-b4ab-6691dd092b17", 00:18:17.371 "is_configured": true, 00:18:17.371 "data_offset": 0, 00:18:17.371 "data_size": 65536 00:18:17.371 }, 00:18:17.371 { 00:18:17.371 "name": "BaseBdev4", 00:18:17.371 "uuid": "11149556-1143-48d0-b68c-5e0f3d1fba6a", 00:18:17.371 "is_configured": true, 00:18:17.371 "data_offset": 0, 00:18:17.371 "data_size": 65536 00:18:17.371 } 00:18:17.371 ] 00:18:17.371 }' 00:18:17.371 03:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.371 03:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.938 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:18:17.938 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:18:17.938 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:17.938 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:17.938 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:17.938 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:18:17.938 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:17.938 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:18.197 [2024-05-15 03:12:49.296733] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.197 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:18.197 "name": "Existed_Raid", 00:18:18.197 "aliases": [ 00:18:18.197 "cbc91da7-6e56-49e6-8649-66c566c70989" 00:18:18.197 ], 00:18:18.197 
"product_name": "Raid Volume", 00:18:18.197 "block_size": 512, 00:18:18.197 "num_blocks": 65536, 00:18:18.197 "uuid": "cbc91da7-6e56-49e6-8649-66c566c70989", 00:18:18.197 "assigned_rate_limits": { 00:18:18.197 "rw_ios_per_sec": 0, 00:18:18.197 "rw_mbytes_per_sec": 0, 00:18:18.197 "r_mbytes_per_sec": 0, 00:18:18.197 "w_mbytes_per_sec": 0 00:18:18.197 }, 00:18:18.197 "claimed": false, 00:18:18.197 "zoned": false, 00:18:18.197 "supported_io_types": { 00:18:18.197 "read": true, 00:18:18.197 "write": true, 00:18:18.197 "unmap": false, 00:18:18.197 "write_zeroes": true, 00:18:18.197 "flush": false, 00:18:18.197 "reset": true, 00:18:18.197 "compare": false, 00:18:18.197 "compare_and_write": false, 00:18:18.197 "abort": false, 00:18:18.197 "nvme_admin": false, 00:18:18.197 "nvme_io": false 00:18:18.197 }, 00:18:18.197 "memory_domains": [ 00:18:18.197 { 00:18:18.197 "dma_device_id": "system", 00:18:18.197 "dma_device_type": 1 00:18:18.197 }, 00:18:18.197 { 00:18:18.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.197 "dma_device_type": 2 00:18:18.197 }, 00:18:18.197 { 00:18:18.197 "dma_device_id": "system", 00:18:18.197 "dma_device_type": 1 00:18:18.197 }, 00:18:18.197 { 00:18:18.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.197 "dma_device_type": 2 00:18:18.197 }, 00:18:18.197 { 00:18:18.197 "dma_device_id": "system", 00:18:18.197 "dma_device_type": 1 00:18:18.197 }, 00:18:18.197 { 00:18:18.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.197 "dma_device_type": 2 00:18:18.197 }, 00:18:18.197 { 00:18:18.197 "dma_device_id": "system", 00:18:18.197 "dma_device_type": 1 00:18:18.197 }, 00:18:18.197 { 00:18:18.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.197 "dma_device_type": 2 00:18:18.197 } 00:18:18.197 ], 00:18:18.197 "driver_specific": { 00:18:18.197 "raid": { 00:18:18.197 "uuid": "cbc91da7-6e56-49e6-8649-66c566c70989", 00:18:18.197 "strip_size_kb": 0, 00:18:18.197 "state": "online", 00:18:18.197 "raid_level": "raid1", 00:18:18.197 "superblock": false, 00:18:18.197 "num_base_bdevs": 4, 00:18:18.197 "num_base_bdevs_discovered": 4, 00:18:18.197 "num_base_bdevs_operational": 4, 00:18:18.197 "base_bdevs_list": [ 00:18:18.197 { 00:18:18.197 "name": "BaseBdev1", 00:18:18.197 "uuid": "a7cd8f76-e670-4061-a6e3-3a0ec6f4d8f6", 00:18:18.197 "is_configured": true, 00:18:18.197 "data_offset": 0, 00:18:18.197 "data_size": 65536 00:18:18.197 }, 00:18:18.197 { 00:18:18.197 "name": "BaseBdev2", 00:18:18.197 "uuid": "fb1a6792-a954-44a2-ac57-ccd6215ede2b", 00:18:18.197 "is_configured": true, 00:18:18.197 "data_offset": 0, 00:18:18.197 "data_size": 65536 00:18:18.197 }, 00:18:18.197 { 00:18:18.197 "name": "BaseBdev3", 00:18:18.197 "uuid": "15e25c8d-cebb-4f9f-b4ab-6691dd092b17", 00:18:18.197 "is_configured": true, 00:18:18.197 "data_offset": 0, 00:18:18.197 "data_size": 65536 00:18:18.197 }, 00:18:18.197 { 00:18:18.197 "name": "BaseBdev4", 00:18:18.197 "uuid": "11149556-1143-48d0-b68c-5e0f3d1fba6a", 00:18:18.197 "is_configured": true, 00:18:18.197 "data_offset": 0, 00:18:18.197 "data_size": 65536 00:18:18.197 } 00:18:18.197 ] 00:18:18.197 } 00:18:18.197 } 00:18:18.197 }' 00:18:18.197 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:18.455 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:18:18.455 BaseBdev2 00:18:18.455 BaseBdev3 00:18:18.455 BaseBdev4' 00:18:18.455 03:12:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:18.455 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:18.455 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:18.455 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:18.455 "name": "BaseBdev1", 00:18:18.455 "aliases": [ 00:18:18.455 "a7cd8f76-e670-4061-a6e3-3a0ec6f4d8f6" 00:18:18.455 ], 00:18:18.455 "product_name": "Malloc disk", 00:18:18.455 "block_size": 512, 00:18:18.455 "num_blocks": 65536, 00:18:18.455 "uuid": "a7cd8f76-e670-4061-a6e3-3a0ec6f4d8f6", 00:18:18.455 "assigned_rate_limits": { 00:18:18.455 "rw_ios_per_sec": 0, 00:18:18.455 "rw_mbytes_per_sec": 0, 00:18:18.455 "r_mbytes_per_sec": 0, 00:18:18.455 "w_mbytes_per_sec": 0 00:18:18.455 }, 00:18:18.455 "claimed": true, 00:18:18.455 "claim_type": "exclusive_write", 00:18:18.455 "zoned": false, 00:18:18.455 "supported_io_types": { 00:18:18.455 "read": true, 00:18:18.455 "write": true, 00:18:18.455 "unmap": true, 00:18:18.455 "write_zeroes": true, 00:18:18.455 "flush": true, 00:18:18.455 "reset": true, 00:18:18.455 "compare": false, 00:18:18.455 "compare_and_write": false, 00:18:18.455 "abort": true, 00:18:18.455 "nvme_admin": false, 00:18:18.455 "nvme_io": false 00:18:18.455 }, 00:18:18.455 "memory_domains": [ 00:18:18.455 { 00:18:18.455 "dma_device_id": "system", 00:18:18.455 "dma_device_type": 1 00:18:18.455 }, 00:18:18.455 { 00:18:18.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.455 "dma_device_type": 2 00:18:18.455 } 00:18:18.455 ], 00:18:18.455 "driver_specific": {} 00:18:18.455 }' 00:18:18.455 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:18.713 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:18.713 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:18.713 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:18.713 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:18.713 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:18.713 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:18.713 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:18.970 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:18.970 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:18.970 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:18.970 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:18.971 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:18.971 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:18.971 03:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:19.228 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:19.228 "name": 
"BaseBdev2", 00:18:19.228 "aliases": [ 00:18:19.228 "fb1a6792-a954-44a2-ac57-ccd6215ede2b" 00:18:19.228 ], 00:18:19.228 "product_name": "Malloc disk", 00:18:19.228 "block_size": 512, 00:18:19.228 "num_blocks": 65536, 00:18:19.228 "uuid": "fb1a6792-a954-44a2-ac57-ccd6215ede2b", 00:18:19.228 "assigned_rate_limits": { 00:18:19.228 "rw_ios_per_sec": 0, 00:18:19.228 "rw_mbytes_per_sec": 0, 00:18:19.228 "r_mbytes_per_sec": 0, 00:18:19.228 "w_mbytes_per_sec": 0 00:18:19.228 }, 00:18:19.228 "claimed": true, 00:18:19.228 "claim_type": "exclusive_write", 00:18:19.228 "zoned": false, 00:18:19.228 "supported_io_types": { 00:18:19.228 "read": true, 00:18:19.228 "write": true, 00:18:19.228 "unmap": true, 00:18:19.228 "write_zeroes": true, 00:18:19.228 "flush": true, 00:18:19.228 "reset": true, 00:18:19.228 "compare": false, 00:18:19.228 "compare_and_write": false, 00:18:19.228 "abort": true, 00:18:19.228 "nvme_admin": false, 00:18:19.228 "nvme_io": false 00:18:19.228 }, 00:18:19.228 "memory_domains": [ 00:18:19.228 { 00:18:19.228 "dma_device_id": "system", 00:18:19.228 "dma_device_type": 1 00:18:19.228 }, 00:18:19.228 { 00:18:19.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.228 "dma_device_type": 2 00:18:19.228 } 00:18:19.228 ], 00:18:19.228 "driver_specific": {} 00:18:19.228 }' 00:18:19.228 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:19.228 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:19.228 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:19.228 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:19.228 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:19.485 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.485 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:19.485 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:19.485 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:19.485 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:19.485 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:19.485 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:19.485 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:19.485 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:19.485 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:19.742 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:19.742 "name": "BaseBdev3", 00:18:19.742 "aliases": [ 00:18:19.742 "15e25c8d-cebb-4f9f-b4ab-6691dd092b17" 00:18:19.742 ], 00:18:19.742 "product_name": "Malloc disk", 00:18:19.742 "block_size": 512, 00:18:19.742 "num_blocks": 65536, 00:18:19.742 "uuid": "15e25c8d-cebb-4f9f-b4ab-6691dd092b17", 00:18:19.742 "assigned_rate_limits": { 00:18:19.742 "rw_ios_per_sec": 0, 00:18:19.742 "rw_mbytes_per_sec": 0, 00:18:19.742 "r_mbytes_per_sec": 0, 00:18:19.742 "w_mbytes_per_sec": 0 00:18:19.742 }, 
00:18:19.742 "claimed": true, 00:18:19.742 "claim_type": "exclusive_write", 00:18:19.742 "zoned": false, 00:18:19.742 "supported_io_types": { 00:18:19.742 "read": true, 00:18:19.742 "write": true, 00:18:19.742 "unmap": true, 00:18:19.742 "write_zeroes": true, 00:18:19.742 "flush": true, 00:18:19.742 "reset": true, 00:18:19.742 "compare": false, 00:18:19.742 "compare_and_write": false, 00:18:19.742 "abort": true, 00:18:19.742 "nvme_admin": false, 00:18:19.742 "nvme_io": false 00:18:19.742 }, 00:18:19.742 "memory_domains": [ 00:18:19.742 { 00:18:19.742 "dma_device_id": "system", 00:18:19.742 "dma_device_type": 1 00:18:19.742 }, 00:18:19.742 { 00:18:19.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.742 "dma_device_type": 2 00:18:19.742 } 00:18:19.742 ], 00:18:19.742 "driver_specific": {} 00:18:19.742 }' 00:18:19.742 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:19.742 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:19.742 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:19.742 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:20.028 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:20.028 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.028 03:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:20.028 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:20.028 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.028 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:20.028 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:20.028 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:20.028 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:20.028 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:20.028 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:20.284 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:20.284 "name": "BaseBdev4", 00:18:20.284 "aliases": [ 00:18:20.284 "11149556-1143-48d0-b68c-5e0f3d1fba6a" 00:18:20.284 ], 00:18:20.284 "product_name": "Malloc disk", 00:18:20.284 "block_size": 512, 00:18:20.284 "num_blocks": 65536, 00:18:20.284 "uuid": "11149556-1143-48d0-b68c-5e0f3d1fba6a", 00:18:20.284 "assigned_rate_limits": { 00:18:20.284 "rw_ios_per_sec": 0, 00:18:20.284 "rw_mbytes_per_sec": 0, 00:18:20.284 "r_mbytes_per_sec": 0, 00:18:20.284 "w_mbytes_per_sec": 0 00:18:20.284 }, 00:18:20.284 "claimed": true, 00:18:20.284 "claim_type": "exclusive_write", 00:18:20.284 "zoned": false, 00:18:20.284 "supported_io_types": { 00:18:20.284 "read": true, 00:18:20.284 "write": true, 00:18:20.284 "unmap": true, 00:18:20.284 "write_zeroes": true, 00:18:20.284 "flush": true, 00:18:20.284 "reset": true, 00:18:20.284 "compare": false, 00:18:20.284 "compare_and_write": false, 00:18:20.284 "abort": true, 00:18:20.284 "nvme_admin": false, 00:18:20.284 "nvme_io": false 
00:18:20.284 }, 00:18:20.284 "memory_domains": [ 00:18:20.284 { 00:18:20.284 "dma_device_id": "system", 00:18:20.284 "dma_device_type": 1 00:18:20.284 }, 00:18:20.284 { 00:18:20.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.284 "dma_device_type": 2 00:18:20.284 } 00:18:20.284 ], 00:18:20.284 "driver_specific": {} 00:18:20.284 }' 00:18:20.284 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:20.284 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:20.541 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:20.541 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:20.541 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:20.541 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.541 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:20.541 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:20.541 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.541 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:20.798 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:20.798 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:20.798 03:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:21.056 [2024-05-15 03:12:52.003713] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 
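The check being traced at this point reduces to a simple query-and-compare pattern. A minimal standalone sketch of it follows -- not the autotest helper itself; it assumes the rpc.py path and RPC socket shown in the trace, and check_raid_state is a hypothetical stand-in for the script's verify_raid_bdev_state:

#!/usr/bin/env bash
# Sketch only: assumes the rpc.py path and socket from the trace above.
rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

check_raid_state() {
	local name=$1 state=$2 level=$3 discovered=$4 info
	# Same query the trace issues: dump all raid bdevs, keep the named one.
	info=$("$rpc_py" -s "$sock" bdev_raid_get_bdevs all |
		jq -r ".[] | select(.name == \"$name\")")
	[ "$(jq -r .state <<< "$info")" = "$state" ] &&
		[ "$(jq -r .raid_level <<< "$info")" = "$level" ] &&
		[ "$(jq -r .num_base_bdevs_discovered <<< "$info")" -eq "$discovered" ]
}

# After BaseBdev1 is deleted, raid1 is expected to survive degraded:
# still online, with 3 of 4 base bdevs discovered, as the trace verifies.
check_raid_state Existed_Raid online raid1 3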
00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.056 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.314 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:21.314 "name": "Existed_Raid", 00:18:21.314 "uuid": "cbc91da7-6e56-49e6-8649-66c566c70989", 00:18:21.314 "strip_size_kb": 0, 00:18:21.314 "state": "online", 00:18:21.314 "raid_level": "raid1", 00:18:21.314 "superblock": false, 00:18:21.314 "num_base_bdevs": 4, 00:18:21.314 "num_base_bdevs_discovered": 3, 00:18:21.314 "num_base_bdevs_operational": 3, 00:18:21.314 "base_bdevs_list": [ 00:18:21.314 { 00:18:21.314 "name": null, 00:18:21.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.314 "is_configured": false, 00:18:21.314 "data_offset": 0, 00:18:21.314 "data_size": 65536 00:18:21.314 }, 00:18:21.314 { 00:18:21.314 "name": "BaseBdev2", 00:18:21.314 "uuid": "fb1a6792-a954-44a2-ac57-ccd6215ede2b", 00:18:21.314 "is_configured": true, 00:18:21.314 "data_offset": 0, 00:18:21.314 "data_size": 65536 00:18:21.314 }, 00:18:21.314 { 00:18:21.314 "name": "BaseBdev3", 00:18:21.314 "uuid": "15e25c8d-cebb-4f9f-b4ab-6691dd092b17", 00:18:21.314 "is_configured": true, 00:18:21.314 "data_offset": 0, 00:18:21.314 "data_size": 65536 00:18:21.314 }, 00:18:21.314 { 00:18:21.314 "name": "BaseBdev4", 00:18:21.314 "uuid": "11149556-1143-48d0-b68c-5e0f3d1fba6a", 00:18:21.314 "is_configured": true, 00:18:21.314 "data_offset": 0, 00:18:21.314 "data_size": 65536 00:18:21.314 } 00:18:21.314 ] 00:18:21.314 }' 00:18:21.314 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:21.314 03:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.879 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:21.879 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:21.879 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:21.879 03:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.137 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:22.137 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:22.137 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:22.394 [2024-05-15 03:12:53.348490] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:22.394 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:22.394 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:22.394 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.394 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 
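The loop being traced here (bdev_raid.sh@286-@292) walks the remaining base bdevs, confirming before each deletion that the raid bdev is still reported. A loose rendering under the same assumptions as the sketch above (rpc.py path and socket taken from the trace):

# Sketch of the teardown loop, not the script's own code.
rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
num_base_bdevs=4

for ((i = 1; i < num_base_bdevs; i++)); do
	# Name of the first raid bdev, exactly as the trace probes it.
	raid_bdev=$("$rpc_py" -s "$sock" bdev_raid_get_bdevs all |
		jq -r '.[0]["name"]')
	[ "$raid_bdev" = Existed_Raid ] || exit 1
	"$rpc_py" -s "$sock" bdev_malloc_delete "BaseBdev$((i + 1))"
done
# Once the last member is deleted, raid1 can no longer stay online; the
# trace below shows the state changing from online to offline at that point.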
00:18:22.651 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:22.651 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:22.651 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:22.909 [2024-05-15 03:12:53.864156] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:22.909 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:22.909 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:22.909 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.909 03:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:23.166 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:23.166 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:23.166 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:23.424 [2024-05-15 03:12:54.380050] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:23.424 [2024-05-15 03:12:54.380112] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.424 [2024-05-15 03:12:54.390824] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.424 [2024-05-15 03:12:54.390893] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.424 [2024-05-15 03:12:54.390905] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21d6670 name Existed_Raid, state offline 00:18:23.424 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:23.424 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:23.424 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:18:23.424 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.682 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:18:23.682 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:18:23.682 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:18:23.682 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:18:23.682 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:23.682 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:23.940 BaseBdev2 00:18:23.940 03:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev 
BaseBdev2 00:18:23.940 03:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:23.940 03:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:23.940 03:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:23.940 03:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:23.940 03:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:23.940 03:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.198 03:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:24.455 [ 00:18:24.455 { 00:18:24.455 "name": "BaseBdev2", 00:18:24.455 "aliases": [ 00:18:24.455 "8fec9418-281c-406c-9aa6-9879aa71a5d5" 00:18:24.455 ], 00:18:24.455 "product_name": "Malloc disk", 00:18:24.455 "block_size": 512, 00:18:24.455 "num_blocks": 65536, 00:18:24.455 "uuid": "8fec9418-281c-406c-9aa6-9879aa71a5d5", 00:18:24.455 "assigned_rate_limits": { 00:18:24.456 "rw_ios_per_sec": 0, 00:18:24.456 "rw_mbytes_per_sec": 0, 00:18:24.456 "r_mbytes_per_sec": 0, 00:18:24.456 "w_mbytes_per_sec": 0 00:18:24.456 }, 00:18:24.456 "claimed": false, 00:18:24.456 "zoned": false, 00:18:24.456 "supported_io_types": { 00:18:24.456 "read": true, 00:18:24.456 "write": true, 00:18:24.456 "unmap": true, 00:18:24.456 "write_zeroes": true, 00:18:24.456 "flush": true, 00:18:24.456 "reset": true, 00:18:24.456 "compare": false, 00:18:24.456 "compare_and_write": false, 00:18:24.456 "abort": true, 00:18:24.456 "nvme_admin": false, 00:18:24.456 "nvme_io": false 00:18:24.456 }, 00:18:24.456 "memory_domains": [ 00:18:24.456 { 00:18:24.456 "dma_device_id": "system", 00:18:24.456 "dma_device_type": 1 00:18:24.456 }, 00:18:24.456 { 00:18:24.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.456 "dma_device_type": 2 00:18:24.456 } 00:18:24.456 ], 00:18:24.456 "driver_specific": {} 00:18:24.456 } 00:18:24.456 ] 00:18:24.456 03:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:24.456 03:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:24.456 03:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:24.456 03:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:24.713 BaseBdev3 00:18:24.713 03:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:18:24.713 03:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:24.713 03:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:24.713 03:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:24.713 03:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:24.713 03:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:24.713 03:12:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.971 03:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:25.229 [ 00:18:25.229 { 00:18:25.229 "name": "BaseBdev3", 00:18:25.229 "aliases": [ 00:18:25.229 "8aca00ee-3842-4d86-a817-cf5ef5f27168" 00:18:25.229 ], 00:18:25.229 "product_name": "Malloc disk", 00:18:25.229 "block_size": 512, 00:18:25.229 "num_blocks": 65536, 00:18:25.229 "uuid": "8aca00ee-3842-4d86-a817-cf5ef5f27168", 00:18:25.229 "assigned_rate_limits": { 00:18:25.229 "rw_ios_per_sec": 0, 00:18:25.229 "rw_mbytes_per_sec": 0, 00:18:25.229 "r_mbytes_per_sec": 0, 00:18:25.229 "w_mbytes_per_sec": 0 00:18:25.229 }, 00:18:25.229 "claimed": false, 00:18:25.229 "zoned": false, 00:18:25.229 "supported_io_types": { 00:18:25.229 "read": true, 00:18:25.229 "write": true, 00:18:25.229 "unmap": true, 00:18:25.229 "write_zeroes": true, 00:18:25.229 "flush": true, 00:18:25.229 "reset": true, 00:18:25.229 "compare": false, 00:18:25.229 "compare_and_write": false, 00:18:25.229 "abort": true, 00:18:25.229 "nvme_admin": false, 00:18:25.229 "nvme_io": false 00:18:25.229 }, 00:18:25.229 "memory_domains": [ 00:18:25.229 { 00:18:25.229 "dma_device_id": "system", 00:18:25.229 "dma_device_type": 1 00:18:25.229 }, 00:18:25.229 { 00:18:25.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.229 "dma_device_type": 2 00:18:25.229 } 00:18:25.229 ], 00:18:25.229 "driver_specific": {} 00:18:25.229 } 00:18:25.229 ] 00:18:25.229 03:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:25.229 03:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:25.229 03:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:25.229 03:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:25.500 BaseBdev4 00:18:25.500 03:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:18:25.500 03:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:18:25.500 03:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:25.500 03:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:25.500 03:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:25.500 03:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:25.500 03:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:25.771 03:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:25.771 [ 00:18:25.771 { 00:18:25.771 "name": "BaseBdev4", 00:18:25.771 "aliases": [ 00:18:25.771 "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa" 00:18:25.771 ], 00:18:25.771 
"product_name": "Malloc disk", 00:18:25.771 "block_size": 512, 00:18:25.771 "num_blocks": 65536, 00:18:25.771 "uuid": "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa", 00:18:25.771 "assigned_rate_limits": { 00:18:25.771 "rw_ios_per_sec": 0, 00:18:25.771 "rw_mbytes_per_sec": 0, 00:18:25.771 "r_mbytes_per_sec": 0, 00:18:25.771 "w_mbytes_per_sec": 0 00:18:25.771 }, 00:18:25.771 "claimed": false, 00:18:25.771 "zoned": false, 00:18:25.771 "supported_io_types": { 00:18:25.771 "read": true, 00:18:25.771 "write": true, 00:18:25.771 "unmap": true, 00:18:25.771 "write_zeroes": true, 00:18:25.771 "flush": true, 00:18:25.771 "reset": true, 00:18:25.771 "compare": false, 00:18:25.771 "compare_and_write": false, 00:18:25.771 "abort": true, 00:18:25.771 "nvme_admin": false, 00:18:25.771 "nvme_io": false 00:18:25.771 }, 00:18:25.771 "memory_domains": [ 00:18:25.771 { 00:18:25.771 "dma_device_id": "system", 00:18:25.771 "dma_device_type": 1 00:18:25.771 }, 00:18:25.771 { 00:18:25.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.771 "dma_device_type": 2 00:18:25.771 } 00:18:25.771 ], 00:18:25.771 "driver_specific": {} 00:18:25.771 } 00:18:25.771 ] 00:18:25.771 03:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:25.771 03:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:25.771 03:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:25.771 03:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:26.029 [2024-05-15 03:12:57.147464] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:26.029 [2024-05-15 03:12:57.147508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:26.029 [2024-05-15 03:12:57.147524] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.029 [2024-05-15 03:12:57.148932] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:26.029 [2024-05-15 03:12:57.148974] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:26.029 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:26.029 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:26.029 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:26.029 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:26.029 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:26.029 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:26.029 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.029 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.029 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.029 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.029 03:12:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.029 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.287 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.287 "name": "Existed_Raid", 00:18:26.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.287 "strip_size_kb": 0, 00:18:26.287 "state": "configuring", 00:18:26.287 "raid_level": "raid1", 00:18:26.287 "superblock": false, 00:18:26.287 "num_base_bdevs": 4, 00:18:26.287 "num_base_bdevs_discovered": 3, 00:18:26.287 "num_base_bdevs_operational": 4, 00:18:26.287 "base_bdevs_list": [ 00:18:26.287 { 00:18:26.287 "name": "BaseBdev1", 00:18:26.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.287 "is_configured": false, 00:18:26.287 "data_offset": 0, 00:18:26.287 "data_size": 0 00:18:26.287 }, 00:18:26.287 { 00:18:26.287 "name": "BaseBdev2", 00:18:26.287 "uuid": "8fec9418-281c-406c-9aa6-9879aa71a5d5", 00:18:26.287 "is_configured": true, 00:18:26.287 "data_offset": 0, 00:18:26.287 "data_size": 65536 00:18:26.287 }, 00:18:26.287 { 00:18:26.287 "name": "BaseBdev3", 00:18:26.287 "uuid": "8aca00ee-3842-4d86-a817-cf5ef5f27168", 00:18:26.287 "is_configured": true, 00:18:26.287 "data_offset": 0, 00:18:26.287 "data_size": 65536 00:18:26.287 }, 00:18:26.287 { 00:18:26.287 "name": "BaseBdev4", 00:18:26.287 "uuid": "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa", 00:18:26.287 "is_configured": true, 00:18:26.287 "data_offset": 0, 00:18:26.287 "data_size": 65536 00:18:26.287 } 00:18:26.287 ] 00:18:26.287 }' 00:18:26.287 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.287 03:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.853 03:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:27.111 [2024-05-15 03:12:58.158135] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:27.111 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:27.111 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:27.111 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:27.111 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:27.111 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:27.111 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:27.111 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.111 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.111 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.111 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.111 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.111 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.369 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.369 "name": "Existed_Raid", 00:18:27.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.369 "strip_size_kb": 0, 00:18:27.369 "state": "configuring", 00:18:27.369 "raid_level": "raid1", 00:18:27.369 "superblock": false, 00:18:27.369 "num_base_bdevs": 4, 00:18:27.369 "num_base_bdevs_discovered": 2, 00:18:27.369 "num_base_bdevs_operational": 4, 00:18:27.369 "base_bdevs_list": [ 00:18:27.369 { 00:18:27.369 "name": "BaseBdev1", 00:18:27.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.369 "is_configured": false, 00:18:27.369 "data_offset": 0, 00:18:27.369 "data_size": 0 00:18:27.369 }, 00:18:27.369 { 00:18:27.369 "name": null, 00:18:27.369 "uuid": "8fec9418-281c-406c-9aa6-9879aa71a5d5", 00:18:27.369 "is_configured": false, 00:18:27.369 "data_offset": 0, 00:18:27.369 "data_size": 65536 00:18:27.369 }, 00:18:27.369 { 00:18:27.369 "name": "BaseBdev3", 00:18:27.369 "uuid": "8aca00ee-3842-4d86-a817-cf5ef5f27168", 00:18:27.369 "is_configured": true, 00:18:27.369 "data_offset": 0, 00:18:27.369 "data_size": 65536 00:18:27.369 }, 00:18:27.369 { 00:18:27.369 "name": "BaseBdev4", 00:18:27.369 "uuid": "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa", 00:18:27.369 "is_configured": true, 00:18:27.369 "data_offset": 0, 00:18:27.369 "data_size": 65536 00:18:27.369 } 00:18:27.369 ] 00:18:27.369 }' 00:18:27.369 03:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.369 03:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.934 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:27.934 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.191 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:18:28.191 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:28.448 [2024-05-15 03:12:59.460962] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.448 BaseBdev1 00:18:28.448 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:18:28.448 03:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:28.448 03:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:28.448 03:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:28.448 03:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:28.448 03:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:28.448 03:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:18:28.706 03:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:28.963 [ 00:18:28.963 { 00:18:28.963 "name": "BaseBdev1", 00:18:28.963 "aliases": [ 00:18:28.963 "0e4e0137-2912-4d3e-9fcc-bec9399a47a6" 00:18:28.963 ], 00:18:28.963 "product_name": "Malloc disk", 00:18:28.963 "block_size": 512, 00:18:28.963 "num_blocks": 65536, 00:18:28.963 "uuid": "0e4e0137-2912-4d3e-9fcc-bec9399a47a6", 00:18:28.963 "assigned_rate_limits": { 00:18:28.963 "rw_ios_per_sec": 0, 00:18:28.963 "rw_mbytes_per_sec": 0, 00:18:28.963 "r_mbytes_per_sec": 0, 00:18:28.963 "w_mbytes_per_sec": 0 00:18:28.963 }, 00:18:28.963 "claimed": true, 00:18:28.963 "claim_type": "exclusive_write", 00:18:28.963 "zoned": false, 00:18:28.963 "supported_io_types": { 00:18:28.963 "read": true, 00:18:28.963 "write": true, 00:18:28.963 "unmap": true, 00:18:28.963 "write_zeroes": true, 00:18:28.963 "flush": true, 00:18:28.963 "reset": true, 00:18:28.963 "compare": false, 00:18:28.963 "compare_and_write": false, 00:18:28.963 "abort": true, 00:18:28.963 "nvme_admin": false, 00:18:28.963 "nvme_io": false 00:18:28.963 }, 00:18:28.963 "memory_domains": [ 00:18:28.963 { 00:18:28.963 "dma_device_id": "system", 00:18:28.963 "dma_device_type": 1 00:18:28.963 }, 00:18:28.963 { 00:18:28.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.963 "dma_device_type": 2 00:18:28.963 } 00:18:28.963 ], 00:18:28.963 "driver_specific": {} 00:18:28.963 } 00:18:28.963 ] 00:18:28.963 03:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:28.963 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:28.963 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:28.963 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:28.963 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:28.963 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:28.963 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:28.963 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.963 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.963 03:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.963 03:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.963 03:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.963 03:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.220 03:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.220 "name": "Existed_Raid", 00:18:29.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.220 "strip_size_kb": 0, 00:18:29.220 "state": "configuring", 00:18:29.220 "raid_level": "raid1", 00:18:29.220 "superblock": false, 00:18:29.220 
"num_base_bdevs": 4, 00:18:29.220 "num_base_bdevs_discovered": 3, 00:18:29.220 "num_base_bdevs_operational": 4, 00:18:29.220 "base_bdevs_list": [ 00:18:29.220 { 00:18:29.220 "name": "BaseBdev1", 00:18:29.220 "uuid": "0e4e0137-2912-4d3e-9fcc-bec9399a47a6", 00:18:29.220 "is_configured": true, 00:18:29.220 "data_offset": 0, 00:18:29.220 "data_size": 65536 00:18:29.220 }, 00:18:29.220 { 00:18:29.220 "name": null, 00:18:29.220 "uuid": "8fec9418-281c-406c-9aa6-9879aa71a5d5", 00:18:29.220 "is_configured": false, 00:18:29.220 "data_offset": 0, 00:18:29.220 "data_size": 65536 00:18:29.220 }, 00:18:29.220 { 00:18:29.220 "name": "BaseBdev3", 00:18:29.220 "uuid": "8aca00ee-3842-4d86-a817-cf5ef5f27168", 00:18:29.220 "is_configured": true, 00:18:29.220 "data_offset": 0, 00:18:29.220 "data_size": 65536 00:18:29.220 }, 00:18:29.220 { 00:18:29.220 "name": "BaseBdev4", 00:18:29.220 "uuid": "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa", 00:18:29.220 "is_configured": true, 00:18:29.220 "data_offset": 0, 00:18:29.220 "data_size": 65536 00:18:29.220 } 00:18:29.220 ] 00:18:29.220 }' 00:18:29.220 03:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.220 03:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.785 03:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:29.785 03:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.042 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:30.042 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:30.300 [2024-05-15 03:13:01.245765] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:30.300 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:30.300 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:30.300 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:30.300 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:30.300 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:30.300 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:30.300 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.300 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.300 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.300 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.300 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.300 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.557 03:13:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.557 "name": "Existed_Raid", 00:18:30.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.557 "strip_size_kb": 0, 00:18:30.557 "state": "configuring", 00:18:30.557 "raid_level": "raid1", 00:18:30.557 "superblock": false, 00:18:30.557 "num_base_bdevs": 4, 00:18:30.557 "num_base_bdevs_discovered": 2, 00:18:30.557 "num_base_bdevs_operational": 4, 00:18:30.557 "base_bdevs_list": [ 00:18:30.557 { 00:18:30.557 "name": "BaseBdev1", 00:18:30.557 "uuid": "0e4e0137-2912-4d3e-9fcc-bec9399a47a6", 00:18:30.557 "is_configured": true, 00:18:30.557 "data_offset": 0, 00:18:30.557 "data_size": 65536 00:18:30.557 }, 00:18:30.557 { 00:18:30.557 "name": null, 00:18:30.557 "uuid": "8fec9418-281c-406c-9aa6-9879aa71a5d5", 00:18:30.557 "is_configured": false, 00:18:30.557 "data_offset": 0, 00:18:30.557 "data_size": 65536 00:18:30.557 }, 00:18:30.557 { 00:18:30.558 "name": null, 00:18:30.558 "uuid": "8aca00ee-3842-4d86-a817-cf5ef5f27168", 00:18:30.558 "is_configured": false, 00:18:30.558 "data_offset": 0, 00:18:30.558 "data_size": 65536 00:18:30.558 }, 00:18:30.558 { 00:18:30.558 "name": "BaseBdev4", 00:18:30.558 "uuid": "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa", 00:18:30.558 "is_configured": true, 00:18:30.558 "data_offset": 0, 00:18:30.558 "data_size": 65536 00:18:30.558 } 00:18:30.558 ] 00:18:30.558 }' 00:18:30.558 03:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.558 03:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.122 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.122 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:31.380 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:18:31.380 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:31.637 [2024-05-15 03:13:02.625480] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.637 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:31.637 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:31.637 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:31.637 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:31.637 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:31.637 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:31.637 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.637 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.637 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.637 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 
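For reference: the verify_raid_bdev_state helper traced repeatedly above only shows its setup in the xtrace output; the actual field comparisons run after xtrace_disable, so the trace never prints them. A minimal bash sketch of the visible pattern follows — the jq field checks at the end are an assumption reconstructed from the fields dumped above, not taken from the SPDK source.

    # Sketch of the verify_raid_bdev_state pattern visible in this trace
    # (bdev_raid.sh@117-127). The rpc_py path matches the one used above;
    # the comparison step is assumed, since the trace hides it.
    rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py

    verify_raid_bdev_state() {
        local raid_bdev_name=$1               # e.g. Existed_Raid
        local expected_state=$2               # e.g. configuring or online
        local raid_level=$3                   # e.g. raid1
        local strip_size=$4                   # 0 for raid1
        local num_base_bdevs_operational=$5
        local raid_bdev_info

        # Fetch all raid bdevs and keep only the one under test, exactly
        # as the traced rpc.py | jq pipeline does above.
        raid_bdev_info=$("$rpc_py" -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        # Assumed checks against the fields dumped in the trace:
        [ "$(jq -r .state <<<"$raid_bdev_info")" = "$expected_state" ]
        [ "$(jq -r .raid_level <<<"$raid_bdev_info")" = "$raid_level" ]
        [ "$(jq -r .strip_size_kb <<<"$raid_bdev_info")" -eq "$strip_size" ]
        [ "$(jq -r .num_base_bdevs_operational <<<"$raid_bdev_info")" -eq "$num_base_bdevs_operational" ]
    }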
00:18:31.637 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.637 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.895 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.895 "name": "Existed_Raid", 00:18:31.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.895 "strip_size_kb": 0, 00:18:31.895 "state": "configuring", 00:18:31.895 "raid_level": "raid1", 00:18:31.895 "superblock": false, 00:18:31.895 "num_base_bdevs": 4, 00:18:31.895 "num_base_bdevs_discovered": 3, 00:18:31.895 "num_base_bdevs_operational": 4, 00:18:31.895 "base_bdevs_list": [ 00:18:31.895 { 00:18:31.895 "name": "BaseBdev1", 00:18:31.895 "uuid": "0e4e0137-2912-4d3e-9fcc-bec9399a47a6", 00:18:31.895 "is_configured": true, 00:18:31.895 "data_offset": 0, 00:18:31.895 "data_size": 65536 00:18:31.895 }, 00:18:31.895 { 00:18:31.895 "name": null, 00:18:31.895 "uuid": "8fec9418-281c-406c-9aa6-9879aa71a5d5", 00:18:31.895 "is_configured": false, 00:18:31.895 "data_offset": 0, 00:18:31.895 "data_size": 65536 00:18:31.895 }, 00:18:31.895 { 00:18:31.895 "name": "BaseBdev3", 00:18:31.895 "uuid": "8aca00ee-3842-4d86-a817-cf5ef5f27168", 00:18:31.895 "is_configured": true, 00:18:31.895 "data_offset": 0, 00:18:31.895 "data_size": 65536 00:18:31.895 }, 00:18:31.895 { 00:18:31.895 "name": "BaseBdev4", 00:18:31.895 "uuid": "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa", 00:18:31.895 "is_configured": true, 00:18:31.895 "data_offset": 0, 00:18:31.895 "data_size": 65536 00:18:31.895 } 00:18:31.895 ] 00:18:31.895 }' 00:18:31.895 03:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.895 03:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.460 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.460 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:32.718 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:18:32.718 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:32.976 [2024-05-15 03:13:04.009222] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.976 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:32.976 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:32.976 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:32.976 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:32.976 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:32.976 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:32.976 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:18:32.976 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.976 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.976 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.976 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.976 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.233 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:33.233 "name": "Existed_Raid", 00:18:33.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.233 "strip_size_kb": 0, 00:18:33.233 "state": "configuring", 00:18:33.233 "raid_level": "raid1", 00:18:33.233 "superblock": false, 00:18:33.233 "num_base_bdevs": 4, 00:18:33.233 "num_base_bdevs_discovered": 2, 00:18:33.233 "num_base_bdevs_operational": 4, 00:18:33.233 "base_bdevs_list": [ 00:18:33.233 { 00:18:33.233 "name": null, 00:18:33.233 "uuid": "0e4e0137-2912-4d3e-9fcc-bec9399a47a6", 00:18:33.233 "is_configured": false, 00:18:33.233 "data_offset": 0, 00:18:33.233 "data_size": 65536 00:18:33.233 }, 00:18:33.233 { 00:18:33.233 "name": null, 00:18:33.233 "uuid": "8fec9418-281c-406c-9aa6-9879aa71a5d5", 00:18:33.233 "is_configured": false, 00:18:33.233 "data_offset": 0, 00:18:33.233 "data_size": 65536 00:18:33.233 }, 00:18:33.233 { 00:18:33.233 "name": "BaseBdev3", 00:18:33.233 "uuid": "8aca00ee-3842-4d86-a817-cf5ef5f27168", 00:18:33.233 "is_configured": true, 00:18:33.233 "data_offset": 0, 00:18:33.233 "data_size": 65536 00:18:33.233 }, 00:18:33.233 { 00:18:33.233 "name": "BaseBdev4", 00:18:33.233 "uuid": "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa", 00:18:33.233 "is_configured": true, 00:18:33.233 "data_offset": 0, 00:18:33.233 "data_size": 65536 00:18:33.233 } 00:18:33.233 ] 00:18:33.233 }' 00:18:33.233 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:33.233 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.799 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.799 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:34.055 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:18:34.055 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:34.313 [2024-05-15 03:13:05.295029] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.313 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:34.313 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:34.313 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:34.313 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:18:34.313 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:34.313 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:34.313 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:34.313 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:34.313 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:34.313 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:34.313 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.313 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.570 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:34.570 "name": "Existed_Raid", 00:18:34.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.570 "strip_size_kb": 0, 00:18:34.570 "state": "configuring", 00:18:34.570 "raid_level": "raid1", 00:18:34.570 "superblock": false, 00:18:34.570 "num_base_bdevs": 4, 00:18:34.570 "num_base_bdevs_discovered": 3, 00:18:34.570 "num_base_bdevs_operational": 4, 00:18:34.570 "base_bdevs_list": [ 00:18:34.570 { 00:18:34.570 "name": null, 00:18:34.570 "uuid": "0e4e0137-2912-4d3e-9fcc-bec9399a47a6", 00:18:34.570 "is_configured": false, 00:18:34.570 "data_offset": 0, 00:18:34.570 "data_size": 65536 00:18:34.570 }, 00:18:34.570 { 00:18:34.570 "name": "BaseBdev2", 00:18:34.570 "uuid": "8fec9418-281c-406c-9aa6-9879aa71a5d5", 00:18:34.570 "is_configured": true, 00:18:34.570 "data_offset": 0, 00:18:34.570 "data_size": 65536 00:18:34.570 }, 00:18:34.570 { 00:18:34.570 "name": "BaseBdev3", 00:18:34.570 "uuid": "8aca00ee-3842-4d86-a817-cf5ef5f27168", 00:18:34.570 "is_configured": true, 00:18:34.570 "data_offset": 0, 00:18:34.570 "data_size": 65536 00:18:34.570 }, 00:18:34.570 { 00:18:34.570 "name": "BaseBdev4", 00:18:34.570 "uuid": "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa", 00:18:34.570 "is_configured": true, 00:18:34.570 "data_offset": 0, 00:18:34.570 "data_size": 65536 00:18:34.570 } 00:18:34.570 ] 00:18:34.570 }' 00:18:34.570 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:34.570 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.149 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.149 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:35.406 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:18:35.406 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.406 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:35.663 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0e4e0137-2912-4d3e-9fcc-bec9399a47a6 00:18:35.920 [2024-05-15 03:13:06.946846] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:35.921 [2024-05-15 03:13:06.946897] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x21d5c80 00:18:35.921 [2024-05-15 03:13:06.946904] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:35.921 [2024-05-15 03:13:06.947103] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21d6dd0 00:18:35.921 [2024-05-15 03:13:06.947236] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21d5c80 00:18:35.921 [2024-05-15 03:13:06.947244] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x21d5c80 00:18:35.921 [2024-05-15 03:13:06.947411] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.921 NewBaseBdev 00:18:35.921 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:18:35.921 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:18:35.921 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:35.921 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:35.921 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:35.921 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:35.921 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.178 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:36.436 [ 00:18:36.436 { 00:18:36.436 "name": "NewBaseBdev", 00:18:36.436 "aliases": [ 00:18:36.436 "0e4e0137-2912-4d3e-9fcc-bec9399a47a6" 00:18:36.436 ], 00:18:36.436 "product_name": "Malloc disk", 00:18:36.436 "block_size": 512, 00:18:36.436 "num_blocks": 65536, 00:18:36.436 "uuid": "0e4e0137-2912-4d3e-9fcc-bec9399a47a6", 00:18:36.436 "assigned_rate_limits": { 00:18:36.436 "rw_ios_per_sec": 0, 00:18:36.436 "rw_mbytes_per_sec": 0, 00:18:36.436 "r_mbytes_per_sec": 0, 00:18:36.436 "w_mbytes_per_sec": 0 00:18:36.436 }, 00:18:36.436 "claimed": true, 00:18:36.436 "claim_type": "exclusive_write", 00:18:36.436 "zoned": false, 00:18:36.436 "supported_io_types": { 00:18:36.436 "read": true, 00:18:36.436 "write": true, 00:18:36.436 "unmap": true, 00:18:36.436 "write_zeroes": true, 00:18:36.436 "flush": true, 00:18:36.436 "reset": true, 00:18:36.436 "compare": false, 00:18:36.436 "compare_and_write": false, 00:18:36.436 "abort": true, 00:18:36.436 "nvme_admin": false, 00:18:36.436 "nvme_io": false 00:18:36.436 }, 00:18:36.436 "memory_domains": [ 00:18:36.436 { 00:18:36.436 "dma_device_id": "system", 00:18:36.436 "dma_device_type": 1 00:18:36.436 }, 00:18:36.436 { 00:18:36.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.436 "dma_device_type": 2 00:18:36.436 } 00:18:36.436 ], 00:18:36.436 "driver_specific": {} 00:18:36.436 } 00:18:36.436 ] 
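The waitforbdev helper invoked right after each bdev_malloc_create above is fully visible in the trace: it defaults the timeout to 2000 ms when none is given, drains bdev examination, then polls for the named bdev. A bash sketch of that traced sequence, assuming the same RPC socket:

    # waitforbdev as traced above (autotest_common.sh@895-903): wait for
    # examine to complete, then let bdev_get_bdevs -t poll up to the
    # timeout for the bdev to appear before the test proceeds.
    rpc_py=${rpc_py:-/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py}

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}   # trace shows -t 2000 when unset

        "$rpc_py" -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
        "$rpc_py" -s /var/tmp/spdk-raid.sock \
            bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }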
00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.436 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.694 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.694 "name": "Existed_Raid", 00:18:36.694 "uuid": "0492b2c0-a369-4666-8805-9dc423473769", 00:18:36.694 "strip_size_kb": 0, 00:18:36.694 "state": "online", 00:18:36.694 "raid_level": "raid1", 00:18:36.694 "superblock": false, 00:18:36.694 "num_base_bdevs": 4, 00:18:36.694 "num_base_bdevs_discovered": 4, 00:18:36.694 "num_base_bdevs_operational": 4, 00:18:36.694 "base_bdevs_list": [ 00:18:36.694 { 00:18:36.694 "name": "NewBaseBdev", 00:18:36.694 "uuid": "0e4e0137-2912-4d3e-9fcc-bec9399a47a6", 00:18:36.694 "is_configured": true, 00:18:36.694 "data_offset": 0, 00:18:36.694 "data_size": 65536 00:18:36.694 }, 00:18:36.694 { 00:18:36.694 "name": "BaseBdev2", 00:18:36.694 "uuid": "8fec9418-281c-406c-9aa6-9879aa71a5d5", 00:18:36.694 "is_configured": true, 00:18:36.694 "data_offset": 0, 00:18:36.694 "data_size": 65536 00:18:36.694 }, 00:18:36.694 { 00:18:36.694 "name": "BaseBdev3", 00:18:36.694 "uuid": "8aca00ee-3842-4d86-a817-cf5ef5f27168", 00:18:36.694 "is_configured": true, 00:18:36.694 "data_offset": 0, 00:18:36.694 "data_size": 65536 00:18:36.694 }, 00:18:36.694 { 00:18:36.694 "name": "BaseBdev4", 00:18:36.694 "uuid": "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa", 00:18:36.694 "is_configured": true, 00:18:36.694 "data_offset": 0, 00:18:36.694 "data_size": 65536 00:18:36.694 } 00:18:36.694 ] 00:18:36.694 }' 00:18:36.694 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.694 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.259 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:18:37.259 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:18:37.259 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # 
local raid_bdev_info 00:18:37.259 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:37.259 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:37.259 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:18:37.259 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:37.259 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:37.517 [2024-05-15 03:13:08.583558] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.517 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:37.517 "name": "Existed_Raid", 00:18:37.517 "aliases": [ 00:18:37.517 "0492b2c0-a369-4666-8805-9dc423473769" 00:18:37.517 ], 00:18:37.517 "product_name": "Raid Volume", 00:18:37.517 "block_size": 512, 00:18:37.517 "num_blocks": 65536, 00:18:37.517 "uuid": "0492b2c0-a369-4666-8805-9dc423473769", 00:18:37.517 "assigned_rate_limits": { 00:18:37.517 "rw_ios_per_sec": 0, 00:18:37.517 "rw_mbytes_per_sec": 0, 00:18:37.517 "r_mbytes_per_sec": 0, 00:18:37.517 "w_mbytes_per_sec": 0 00:18:37.517 }, 00:18:37.517 "claimed": false, 00:18:37.517 "zoned": false, 00:18:37.517 "supported_io_types": { 00:18:37.517 "read": true, 00:18:37.517 "write": true, 00:18:37.517 "unmap": false, 00:18:37.517 "write_zeroes": true, 00:18:37.517 "flush": false, 00:18:37.517 "reset": true, 00:18:37.517 "compare": false, 00:18:37.517 "compare_and_write": false, 00:18:37.517 "abort": false, 00:18:37.517 "nvme_admin": false, 00:18:37.517 "nvme_io": false 00:18:37.517 }, 00:18:37.517 "memory_domains": [ 00:18:37.517 { 00:18:37.517 "dma_device_id": "system", 00:18:37.517 "dma_device_type": 1 00:18:37.517 }, 00:18:37.517 { 00:18:37.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.517 "dma_device_type": 2 00:18:37.517 }, 00:18:37.517 { 00:18:37.517 "dma_device_id": "system", 00:18:37.517 "dma_device_type": 1 00:18:37.517 }, 00:18:37.517 { 00:18:37.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.517 "dma_device_type": 2 00:18:37.517 }, 00:18:37.517 { 00:18:37.517 "dma_device_id": "system", 00:18:37.517 "dma_device_type": 1 00:18:37.517 }, 00:18:37.517 { 00:18:37.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.517 "dma_device_type": 2 00:18:37.517 }, 00:18:37.517 { 00:18:37.517 "dma_device_id": "system", 00:18:37.517 "dma_device_type": 1 00:18:37.517 }, 00:18:37.517 { 00:18:37.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.517 "dma_device_type": 2 00:18:37.517 } 00:18:37.517 ], 00:18:37.517 "driver_specific": { 00:18:37.517 "raid": { 00:18:37.517 "uuid": "0492b2c0-a369-4666-8805-9dc423473769", 00:18:37.517 "strip_size_kb": 0, 00:18:37.517 "state": "online", 00:18:37.517 "raid_level": "raid1", 00:18:37.517 "superblock": false, 00:18:37.517 "num_base_bdevs": 4, 00:18:37.517 "num_base_bdevs_discovered": 4, 00:18:37.517 "num_base_bdevs_operational": 4, 00:18:37.517 "base_bdevs_list": [ 00:18:37.517 { 00:18:37.517 "name": "NewBaseBdev", 00:18:37.517 "uuid": "0e4e0137-2912-4d3e-9fcc-bec9399a47a6", 00:18:37.517 "is_configured": true, 00:18:37.517 "data_offset": 0, 00:18:37.517 "data_size": 65536 00:18:37.517 }, 00:18:37.517 { 00:18:37.517 "name": "BaseBdev2", 00:18:37.517 "uuid": "8fec9418-281c-406c-9aa6-9879aa71a5d5", 00:18:37.517 "is_configured": true, 
00:18:37.517 "data_offset": 0, 00:18:37.517 "data_size": 65536 00:18:37.517 }, 00:18:37.517 { 00:18:37.517 "name": "BaseBdev3", 00:18:37.517 "uuid": "8aca00ee-3842-4d86-a817-cf5ef5f27168", 00:18:37.517 "is_configured": true, 00:18:37.517 "data_offset": 0, 00:18:37.517 "data_size": 65536 00:18:37.517 }, 00:18:37.517 { 00:18:37.517 "name": "BaseBdev4", 00:18:37.517 "uuid": "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa", 00:18:37.517 "is_configured": true, 00:18:37.517 "data_offset": 0, 00:18:37.517 "data_size": 65536 00:18:37.517 } 00:18:37.517 ] 00:18:37.517 } 00:18:37.517 } 00:18:37.517 }' 00:18:37.517 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:37.517 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:18:37.517 BaseBdev2 00:18:37.517 BaseBdev3 00:18:37.517 BaseBdev4' 00:18:37.517 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:37.517 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:37.517 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:37.775 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:37.775 "name": "NewBaseBdev", 00:18:37.775 "aliases": [ 00:18:37.775 "0e4e0137-2912-4d3e-9fcc-bec9399a47a6" 00:18:37.775 ], 00:18:37.775 "product_name": "Malloc disk", 00:18:37.775 "block_size": 512, 00:18:37.775 "num_blocks": 65536, 00:18:37.775 "uuid": "0e4e0137-2912-4d3e-9fcc-bec9399a47a6", 00:18:37.775 "assigned_rate_limits": { 00:18:37.775 "rw_ios_per_sec": 0, 00:18:37.775 "rw_mbytes_per_sec": 0, 00:18:37.775 "r_mbytes_per_sec": 0, 00:18:37.775 "w_mbytes_per_sec": 0 00:18:37.775 }, 00:18:37.775 "claimed": true, 00:18:37.775 "claim_type": "exclusive_write", 00:18:37.775 "zoned": false, 00:18:37.775 "supported_io_types": { 00:18:37.775 "read": true, 00:18:37.775 "write": true, 00:18:37.775 "unmap": true, 00:18:37.775 "write_zeroes": true, 00:18:37.775 "flush": true, 00:18:37.775 "reset": true, 00:18:37.775 "compare": false, 00:18:37.775 "compare_and_write": false, 00:18:37.775 "abort": true, 00:18:37.775 "nvme_admin": false, 00:18:37.775 "nvme_io": false 00:18:37.775 }, 00:18:37.775 "memory_domains": [ 00:18:37.775 { 00:18:37.775 "dma_device_id": "system", 00:18:37.775 "dma_device_type": 1 00:18:37.775 }, 00:18:37.775 { 00:18:37.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.775 "dma_device_type": 2 00:18:37.775 } 00:18:37.775 ], 00:18:37.775 "driver_specific": {} 00:18:37.775 }' 00:18:37.775 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:37.775 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:38.032 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:38.032 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:38.032 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:38.032 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:38.032 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:38.032 03:13:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:38.032 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:38.032 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:38.032 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:38.289 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:38.289 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:38.289 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:38.289 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:38.546 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:38.546 "name": "BaseBdev2", 00:18:38.546 "aliases": [ 00:18:38.546 "8fec9418-281c-406c-9aa6-9879aa71a5d5" 00:18:38.546 ], 00:18:38.546 "product_name": "Malloc disk", 00:18:38.546 "block_size": 512, 00:18:38.546 "num_blocks": 65536, 00:18:38.546 "uuid": "8fec9418-281c-406c-9aa6-9879aa71a5d5", 00:18:38.546 "assigned_rate_limits": { 00:18:38.546 "rw_ios_per_sec": 0, 00:18:38.546 "rw_mbytes_per_sec": 0, 00:18:38.546 "r_mbytes_per_sec": 0, 00:18:38.546 "w_mbytes_per_sec": 0 00:18:38.546 }, 00:18:38.546 "claimed": true, 00:18:38.546 "claim_type": "exclusive_write", 00:18:38.546 "zoned": false, 00:18:38.546 "supported_io_types": { 00:18:38.546 "read": true, 00:18:38.546 "write": true, 00:18:38.546 "unmap": true, 00:18:38.546 "write_zeroes": true, 00:18:38.546 "flush": true, 00:18:38.546 "reset": true, 00:18:38.546 "compare": false, 00:18:38.546 "compare_and_write": false, 00:18:38.546 "abort": true, 00:18:38.546 "nvme_admin": false, 00:18:38.546 "nvme_io": false 00:18:38.546 }, 00:18:38.546 "memory_domains": [ 00:18:38.546 { 00:18:38.546 "dma_device_id": "system", 00:18:38.546 "dma_device_type": 1 00:18:38.546 }, 00:18:38.546 { 00:18:38.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.546 "dma_device_type": 2 00:18:38.546 } 00:18:38.546 ], 00:18:38.546 "driver_specific": {} 00:18:38.546 }' 00:18:38.546 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:38.546 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:38.546 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:38.546 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:38.546 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:38.546 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:38.546 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:38.546 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:38.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:38.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:38.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:38.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ 
null == null ]] 00:18:38.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:38.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:38.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:39.062 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:39.062 "name": "BaseBdev3", 00:18:39.062 "aliases": [ 00:18:39.062 "8aca00ee-3842-4d86-a817-cf5ef5f27168" 00:18:39.062 ], 00:18:39.062 "product_name": "Malloc disk", 00:18:39.062 "block_size": 512, 00:18:39.062 "num_blocks": 65536, 00:18:39.062 "uuid": "8aca00ee-3842-4d86-a817-cf5ef5f27168", 00:18:39.062 "assigned_rate_limits": { 00:18:39.062 "rw_ios_per_sec": 0, 00:18:39.062 "rw_mbytes_per_sec": 0, 00:18:39.062 "r_mbytes_per_sec": 0, 00:18:39.062 "w_mbytes_per_sec": 0 00:18:39.062 }, 00:18:39.062 "claimed": true, 00:18:39.062 "claim_type": "exclusive_write", 00:18:39.062 "zoned": false, 00:18:39.062 "supported_io_types": { 00:18:39.062 "read": true, 00:18:39.062 "write": true, 00:18:39.062 "unmap": true, 00:18:39.062 "write_zeroes": true, 00:18:39.062 "flush": true, 00:18:39.062 "reset": true, 00:18:39.062 "compare": false, 00:18:39.062 "compare_and_write": false, 00:18:39.062 "abort": true, 00:18:39.062 "nvme_admin": false, 00:18:39.062 "nvme_io": false 00:18:39.062 }, 00:18:39.062 "memory_domains": [ 00:18:39.062 { 00:18:39.062 "dma_device_id": "system", 00:18:39.062 "dma_device_type": 1 00:18:39.062 }, 00:18:39.062 { 00:18:39.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.062 "dma_device_type": 2 00:18:39.062 } 00:18:39.062 ], 00:18:39.062 "driver_specific": {} 00:18:39.062 }' 00:18:39.062 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:39.062 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:39.062 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:39.062 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:39.062 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:39.346 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:39.346 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:39.346 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:39.346 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:39.346 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:39.346 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:39.346 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:39.346 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:39.346 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:39.346 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:39.617 03:13:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:39.617 "name": "BaseBdev4", 00:18:39.617 "aliases": [ 00:18:39.617 "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa" 00:18:39.617 ], 00:18:39.617 "product_name": "Malloc disk", 00:18:39.617 "block_size": 512, 00:18:39.617 "num_blocks": 65536, 00:18:39.617 "uuid": "7cdcbeaa-b6c2-4016-883e-98ff7a7ddcaa", 00:18:39.617 "assigned_rate_limits": { 00:18:39.617 "rw_ios_per_sec": 0, 00:18:39.617 "rw_mbytes_per_sec": 0, 00:18:39.617 "r_mbytes_per_sec": 0, 00:18:39.617 "w_mbytes_per_sec": 0 00:18:39.617 }, 00:18:39.617 "claimed": true, 00:18:39.617 "claim_type": "exclusive_write", 00:18:39.617 "zoned": false, 00:18:39.617 "supported_io_types": { 00:18:39.617 "read": true, 00:18:39.617 "write": true, 00:18:39.617 "unmap": true, 00:18:39.617 "write_zeroes": true, 00:18:39.617 "flush": true, 00:18:39.617 "reset": true, 00:18:39.617 "compare": false, 00:18:39.617 "compare_and_write": false, 00:18:39.617 "abort": true, 00:18:39.617 "nvme_admin": false, 00:18:39.617 "nvme_io": false 00:18:39.617 }, 00:18:39.617 "memory_domains": [ 00:18:39.617 { 00:18:39.617 "dma_device_id": "system", 00:18:39.618 "dma_device_type": 1 00:18:39.618 }, 00:18:39.618 { 00:18:39.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.618 "dma_device_type": 2 00:18:39.618 } 00:18:39.618 ], 00:18:39.618 "driver_specific": {} 00:18:39.618 }' 00:18:39.618 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:39.618 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:39.618 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:39.618 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:39.618 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:39.875 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:39.875 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:39.875 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:39.875 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:39.875 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:39.875 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:39.875 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:39.875 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:40.132 [2024-05-15 03:13:11.230606] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:40.133 [2024-05-15 03:13:11.230633] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.133 [2024-05-15 03:13:11.230684] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.133 [2024-05-15 03:13:11.230979] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.133 [2024-05-15 03:13:11.230989] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21d5c80 name Existed_Raid, state offline 00:18:40.133 03:13:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 4133616 00:18:40.133 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 4133616 ']' 00:18:40.133 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 4133616 00:18:40.133 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:18:40.133 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:40.133 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4133616 00:18:40.133 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:40.391 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:40.391 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4133616' 00:18:40.391 killing process with pid 4133616 00:18:40.391 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 4133616 00:18:40.391 [2024-05-15 03:13:11.292842] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:40.391 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 4133616 00:18:40.391 [2024-05-15 03:13:11.326325] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:40.391 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:18:40.391 00:18:40.391 real 0m33.070s 00:18:40.391 user 1m2.106s 00:18:40.391 sys 0m4.540s 00:18:40.391 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:40.391 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.391 ************************************ 00:18:40.391 END TEST raid_state_function_test 00:18:40.391 ************************************ 00:18:40.649 03:13:11 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:40.649 03:13:11 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:40.649 03:13:11 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:40.649 03:13:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.649 ************************************ 00:18:40.649 START TEST raid_state_function_test_sb 00:18:40.649 ************************************ 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 true 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=4139716 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4139716' 00:18:40.650 Process raid pid: 4139716 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 4139716 /var/tmp/spdk-raid.sock 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 4139716 ']' 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
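The _sb variant being set up here differs from the plain state-function test only in superblock=true, which becomes superblock_create_arg=-s on every bdev_raid_create call that follows. A minimal sketch of that create call, assuming a target already serving the RPC socket; with no base bdevs registered yet (the "doesn't exist now" notices below), the raid stays in the "configuring" state that the next verify step expects.

    # Superblock-enabled raid creation as issued by the sb test below.
    # -s writes an on-disk superblock to each base bdev; -r sets the level.
    rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py

    "$rpc_py" -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid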
00:18:40.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:40.650 03:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.650 [2024-05-15 03:13:11.672490] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:18:40.650 [2024-05-15 03:13:11.672526] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.650 [2024-05-15 03:13:11.758467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.908 [2024-05-15 03:13:11.848225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.908 [2024-05-15 03:13:11.903836] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.908 [2024-05-15 03:13:11.903871] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.908 03:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:40.908 03:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:18:40.908 03:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:41.165 [2024-05-15 03:13:12.184593] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:41.165 [2024-05-15 03:13:12.184629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:41.165 [2024-05-15 03:13:12.184639] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.165 [2024-05-15 03:13:12.184648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.165 [2024-05-15 03:13:12.184655] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:41.165 [2024-05-15 03:13:12.184664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:41.165 [2024-05-15 03:13:12.184671] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:41.165 [2024-05-15 03:13:12.184679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:41.165 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:41.165 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:41.165 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:41.166 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:41.166 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:41.166 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:41.166 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.166 03:13:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.166 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.166 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.166 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.166 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.423 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.423 "name": "Existed_Raid", 00:18:41.423 "uuid": "9feccbae-3abe-4251-bf47-28f7bad43af0", 00:18:41.423 "strip_size_kb": 0, 00:18:41.423 "state": "configuring", 00:18:41.423 "raid_level": "raid1", 00:18:41.423 "superblock": true, 00:18:41.423 "num_base_bdevs": 4, 00:18:41.423 "num_base_bdevs_discovered": 0, 00:18:41.423 "num_base_bdevs_operational": 4, 00:18:41.423 "base_bdevs_list": [ 00:18:41.423 { 00:18:41.423 "name": "BaseBdev1", 00:18:41.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.423 "is_configured": false, 00:18:41.423 "data_offset": 0, 00:18:41.423 "data_size": 0 00:18:41.423 }, 00:18:41.423 { 00:18:41.423 "name": "BaseBdev2", 00:18:41.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.423 "is_configured": false, 00:18:41.423 "data_offset": 0, 00:18:41.423 "data_size": 0 00:18:41.423 }, 00:18:41.423 { 00:18:41.423 "name": "BaseBdev3", 00:18:41.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.423 "is_configured": false, 00:18:41.423 "data_offset": 0, 00:18:41.423 "data_size": 0 00:18:41.423 }, 00:18:41.423 { 00:18:41.423 "name": "BaseBdev4", 00:18:41.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.423 "is_configured": false, 00:18:41.423 "data_offset": 0, 00:18:41.423 "data_size": 0 00:18:41.423 } 00:18:41.423 ] 00:18:41.423 }' 00:18:41.423 03:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.423 03:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.988 03:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:42.246 [2024-05-15 03:13:13.303436] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:42.246 [2024-05-15 03:13:13.303459] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1255e00 name Existed_Raid, state configuring 00:18:42.246 03:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:42.503 [2024-05-15 03:13:13.556136] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:42.503 [2024-05-15 03:13:13.556158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:42.503 [2024-05-15 03:13:13.556166] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:42.503 [2024-05-15 03:13:13.556174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist 
now 00:18:42.503 [2024-05-15 03:13:13.556181] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:42.503 [2024-05-15 03:13:13.556189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:42.503 [2024-05-15 03:13:13.556196] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:42.503 [2024-05-15 03:13:13.556204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:42.503 03:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:42.760 [2024-05-15 03:13:13.818332] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:42.760 BaseBdev1 00:18:42.760 03:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:18:42.760 03:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:42.760 03:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:42.760 03:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:42.760 03:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:42.760 03:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:42.760 03:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:43.016 03:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:43.273 [ 00:18:43.273 { 00:18:43.273 "name": "BaseBdev1", 00:18:43.273 "aliases": [ 00:18:43.273 "0914a117-54e6-4c1d-968c-448585910918" 00:18:43.273 ], 00:18:43.273 "product_name": "Malloc disk", 00:18:43.273 "block_size": 512, 00:18:43.273 "num_blocks": 65536, 00:18:43.273 "uuid": "0914a117-54e6-4c1d-968c-448585910918", 00:18:43.273 "assigned_rate_limits": { 00:18:43.273 "rw_ios_per_sec": 0, 00:18:43.273 "rw_mbytes_per_sec": 0, 00:18:43.273 "r_mbytes_per_sec": 0, 00:18:43.273 "w_mbytes_per_sec": 0 00:18:43.273 }, 00:18:43.273 "claimed": true, 00:18:43.273 "claim_type": "exclusive_write", 00:18:43.273 "zoned": false, 00:18:43.273 "supported_io_types": { 00:18:43.273 "read": true, 00:18:43.273 "write": true, 00:18:43.273 "unmap": true, 00:18:43.273 "write_zeroes": true, 00:18:43.273 "flush": true, 00:18:43.273 "reset": true, 00:18:43.273 "compare": false, 00:18:43.273 "compare_and_write": false, 00:18:43.273 "abort": true, 00:18:43.273 "nvme_admin": false, 00:18:43.273 "nvme_io": false 00:18:43.273 }, 00:18:43.273 "memory_domains": [ 00:18:43.273 { 00:18:43.273 "dma_device_id": "system", 00:18:43.273 "dma_device_type": 1 00:18:43.273 }, 00:18:43.273 { 00:18:43.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.273 "dma_device_type": 2 00:18:43.273 } 00:18:43.273 ], 00:18:43.273 "driver_specific": {} 00:18:43.274 } 00:18:43.274 ] 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.274 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.531 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.531 "name": "Existed_Raid", 00:18:43.531 "uuid": "4ce6db56-7e03-487d-ba96-c98f1dc49d00", 00:18:43.531 "strip_size_kb": 0, 00:18:43.531 "state": "configuring", 00:18:43.531 "raid_level": "raid1", 00:18:43.531 "superblock": true, 00:18:43.531 "num_base_bdevs": 4, 00:18:43.531 "num_base_bdevs_discovered": 1, 00:18:43.531 "num_base_bdevs_operational": 4, 00:18:43.531 "base_bdevs_list": [ 00:18:43.531 { 00:18:43.531 "name": "BaseBdev1", 00:18:43.531 "uuid": "0914a117-54e6-4c1d-968c-448585910918", 00:18:43.531 "is_configured": true, 00:18:43.531 "data_offset": 2048, 00:18:43.531 "data_size": 63488 00:18:43.531 }, 00:18:43.531 { 00:18:43.531 "name": "BaseBdev2", 00:18:43.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.531 "is_configured": false, 00:18:43.531 "data_offset": 0, 00:18:43.531 "data_size": 0 00:18:43.531 }, 00:18:43.531 { 00:18:43.531 "name": "BaseBdev3", 00:18:43.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.531 "is_configured": false, 00:18:43.531 "data_offset": 0, 00:18:43.531 "data_size": 0 00:18:43.531 }, 00:18:43.531 { 00:18:43.531 "name": "BaseBdev4", 00:18:43.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.531 "is_configured": false, 00:18:43.531 "data_offset": 0, 00:18:43.531 "data_size": 0 00:18:43.531 } 00:18:43.531 ] 00:18:43.531 }' 00:18:43.531 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.531 03:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.096 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:44.353 [2024-05-15 03:13:15.462740] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:44.353 [2024-05-15 03:13:15.462781] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12560a0 name 
Existed_Raid, state configuring 00:18:44.353 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:44.611 [2024-05-15 03:13:15.719473] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.611 [2024-05-15 03:13:15.720991] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:44.611 [2024-05-15 03:13:15.721023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:44.611 [2024-05-15 03:13:15.721032] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:44.611 [2024-05-15 03:13:15.721040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:44.611 [2024-05-15 03:13:15.721047] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:44.611 [2024-05-15 03:13:15.721056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.611 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.869 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:44.869 "name": "Existed_Raid", 00:18:44.869 "uuid": "6ae60670-a494-452d-abf1-6ec27baa9616", 00:18:44.869 "strip_size_kb": 0, 00:18:44.869 "state": "configuring", 00:18:44.869 "raid_level": "raid1", 00:18:44.869 "superblock": true, 00:18:44.869 "num_base_bdevs": 4, 00:18:44.869 "num_base_bdevs_discovered": 1, 00:18:44.869 "num_base_bdevs_operational": 4, 00:18:44.869 "base_bdevs_list": [ 00:18:44.869 { 00:18:44.869 "name": "BaseBdev1", 00:18:44.869 "uuid": 
"0914a117-54e6-4c1d-968c-448585910918", 00:18:44.869 "is_configured": true, 00:18:44.869 "data_offset": 2048, 00:18:44.869 "data_size": 63488 00:18:44.869 }, 00:18:44.869 { 00:18:44.869 "name": "BaseBdev2", 00:18:44.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.869 "is_configured": false, 00:18:44.869 "data_offset": 0, 00:18:44.869 "data_size": 0 00:18:44.869 }, 00:18:44.869 { 00:18:44.869 "name": "BaseBdev3", 00:18:44.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.869 "is_configured": false, 00:18:44.869 "data_offset": 0, 00:18:44.869 "data_size": 0 00:18:44.869 }, 00:18:44.869 { 00:18:44.869 "name": "BaseBdev4", 00:18:44.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.869 "is_configured": false, 00:18:44.869 "data_offset": 0, 00:18:44.869 "data_size": 0 00:18:44.869 } 00:18:44.869 ] 00:18:44.869 }' 00:18:44.869 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:44.869 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.801 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:45.802 [2024-05-15 03:13:16.853830] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.802 BaseBdev2 00:18:45.802 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:18:45.802 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:45.802 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:45.802 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:45.802 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:45.802 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:45.802 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:46.059 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:46.316 [ 00:18:46.316 { 00:18:46.316 "name": "BaseBdev2", 00:18:46.316 "aliases": [ 00:18:46.316 "c2918012-4ba9-4a5d-98d8-e5567f2ee0b5" 00:18:46.316 ], 00:18:46.317 "product_name": "Malloc disk", 00:18:46.317 "block_size": 512, 00:18:46.317 "num_blocks": 65536, 00:18:46.317 "uuid": "c2918012-4ba9-4a5d-98d8-e5567f2ee0b5", 00:18:46.317 "assigned_rate_limits": { 00:18:46.317 "rw_ios_per_sec": 0, 00:18:46.317 "rw_mbytes_per_sec": 0, 00:18:46.317 "r_mbytes_per_sec": 0, 00:18:46.317 "w_mbytes_per_sec": 0 00:18:46.317 }, 00:18:46.317 "claimed": true, 00:18:46.317 "claim_type": "exclusive_write", 00:18:46.317 "zoned": false, 00:18:46.317 "supported_io_types": { 00:18:46.317 "read": true, 00:18:46.317 "write": true, 00:18:46.317 "unmap": true, 00:18:46.317 "write_zeroes": true, 00:18:46.317 "flush": true, 00:18:46.317 "reset": true, 00:18:46.317 "compare": false, 00:18:46.317 "compare_and_write": false, 00:18:46.317 "abort": true, 00:18:46.317 "nvme_admin": false, 00:18:46.317 "nvme_io": 
false 00:18:46.317 }, 00:18:46.317 "memory_domains": [ 00:18:46.317 { 00:18:46.317 "dma_device_id": "system", 00:18:46.317 "dma_device_type": 1 00:18:46.317 }, 00:18:46.317 { 00:18:46.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.317 "dma_device_type": 2 00:18:46.317 } 00:18:46.317 ], 00:18:46.317 "driver_specific": {} 00:18:46.317 } 00:18:46.317 ] 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.317 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.575 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.575 "name": "Existed_Raid", 00:18:46.575 "uuid": "6ae60670-a494-452d-abf1-6ec27baa9616", 00:18:46.575 "strip_size_kb": 0, 00:18:46.575 "state": "configuring", 00:18:46.575 "raid_level": "raid1", 00:18:46.575 "superblock": true, 00:18:46.575 "num_base_bdevs": 4, 00:18:46.575 "num_base_bdevs_discovered": 2, 00:18:46.575 "num_base_bdevs_operational": 4, 00:18:46.575 "base_bdevs_list": [ 00:18:46.575 { 00:18:46.575 "name": "BaseBdev1", 00:18:46.575 "uuid": "0914a117-54e6-4c1d-968c-448585910918", 00:18:46.575 "is_configured": true, 00:18:46.575 "data_offset": 2048, 00:18:46.575 "data_size": 63488 00:18:46.575 }, 00:18:46.575 { 00:18:46.575 "name": "BaseBdev2", 00:18:46.575 "uuid": "c2918012-4ba9-4a5d-98d8-e5567f2ee0b5", 00:18:46.575 "is_configured": true, 00:18:46.575 "data_offset": 2048, 00:18:46.575 "data_size": 63488 00:18:46.575 }, 00:18:46.575 { 00:18:46.575 "name": "BaseBdev3", 00:18:46.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.575 "is_configured": false, 00:18:46.575 "data_offset": 0, 00:18:46.575 "data_size": 0 00:18:46.575 }, 00:18:46.575 { 00:18:46.575 "name": "BaseBdev4", 00:18:46.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.575 "is_configured": false, 00:18:46.575 
"data_offset": 0, 00:18:46.575 "data_size": 0 00:18:46.575 } 00:18:46.575 ] 00:18:46.575 }' 00:18:46.575 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.575 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.141 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:47.399 [2024-05-15 03:13:18.509544] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:47.399 BaseBdev3 00:18:47.399 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:18:47.399 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:47.399 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:47.399 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:47.399 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:47.399 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:47.399 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:47.656 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:47.914 [ 00:18:47.914 { 00:18:47.914 "name": "BaseBdev3", 00:18:47.914 "aliases": [ 00:18:47.914 "a2eb0740-5bcf-4824-b411-a4300a0f5a8a" 00:18:47.914 ], 00:18:47.914 "product_name": "Malloc disk", 00:18:47.914 "block_size": 512, 00:18:47.914 "num_blocks": 65536, 00:18:47.914 "uuid": "a2eb0740-5bcf-4824-b411-a4300a0f5a8a", 00:18:47.914 "assigned_rate_limits": { 00:18:47.914 "rw_ios_per_sec": 0, 00:18:47.914 "rw_mbytes_per_sec": 0, 00:18:47.914 "r_mbytes_per_sec": 0, 00:18:47.914 "w_mbytes_per_sec": 0 00:18:47.914 }, 00:18:47.914 "claimed": true, 00:18:47.914 "claim_type": "exclusive_write", 00:18:47.914 "zoned": false, 00:18:47.914 "supported_io_types": { 00:18:47.914 "read": true, 00:18:47.914 "write": true, 00:18:47.914 "unmap": true, 00:18:47.914 "write_zeroes": true, 00:18:47.914 "flush": true, 00:18:47.914 "reset": true, 00:18:47.914 "compare": false, 00:18:47.914 "compare_and_write": false, 00:18:47.914 "abort": true, 00:18:47.914 "nvme_admin": false, 00:18:47.914 "nvme_io": false 00:18:47.914 }, 00:18:47.914 "memory_domains": [ 00:18:47.914 { 00:18:47.914 "dma_device_id": "system", 00:18:47.914 "dma_device_type": 1 00:18:47.914 }, 00:18:47.914 { 00:18:47.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.914 "dma_device_type": 2 00:18:47.914 } 00:18:47.914 ], 00:18:47.914 "driver_specific": {} 00:18:47.914 } 00:18:47.914 ] 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 
-- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.914 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.172 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.172 "name": "Existed_Raid", 00:18:48.172 "uuid": "6ae60670-a494-452d-abf1-6ec27baa9616", 00:18:48.172 "strip_size_kb": 0, 00:18:48.172 "state": "configuring", 00:18:48.172 "raid_level": "raid1", 00:18:48.172 "superblock": true, 00:18:48.172 "num_base_bdevs": 4, 00:18:48.172 "num_base_bdevs_discovered": 3, 00:18:48.172 "num_base_bdevs_operational": 4, 00:18:48.172 "base_bdevs_list": [ 00:18:48.172 { 00:18:48.172 "name": "BaseBdev1", 00:18:48.172 "uuid": "0914a117-54e6-4c1d-968c-448585910918", 00:18:48.172 "is_configured": true, 00:18:48.172 "data_offset": 2048, 00:18:48.172 "data_size": 63488 00:18:48.172 }, 00:18:48.172 { 00:18:48.172 "name": "BaseBdev2", 00:18:48.172 "uuid": "c2918012-4ba9-4a5d-98d8-e5567f2ee0b5", 00:18:48.172 "is_configured": true, 00:18:48.172 "data_offset": 2048, 00:18:48.172 "data_size": 63488 00:18:48.172 }, 00:18:48.172 { 00:18:48.172 "name": "BaseBdev3", 00:18:48.172 "uuid": "a2eb0740-5bcf-4824-b411-a4300a0f5a8a", 00:18:48.172 "is_configured": true, 00:18:48.172 "data_offset": 2048, 00:18:48.172 "data_size": 63488 00:18:48.172 }, 00:18:48.172 { 00:18:48.172 "name": "BaseBdev4", 00:18:48.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.172 "is_configured": false, 00:18:48.172 "data_offset": 0, 00:18:48.172 "data_size": 0 00:18:48.172 } 00:18:48.172 ] 00:18:48.172 }' 00:18:48.172 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.172 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.737 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:48.994 [2024-05-15 03:13:20.133296] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:48.994 [2024-05-15 03:13:20.133473] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 
0x1255670 00:18:48.994 [2024-05-15 03:13:20.133486] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:48.994 [2024-05-15 03:13:20.133676] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1257a30 00:18:48.994 [2024-05-15 03:13:20.133812] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1255670 00:18:48.994 [2024-05-15 03:13:20.133821] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1255670 00:18:48.994 [2024-05-15 03:13:20.133926] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.994 BaseBdev4 00:18:48.994 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:18:49.251 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:18:49.251 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:49.251 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:49.251 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:49.251 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:49.251 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:49.508 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:49.508 [ 00:18:49.508 { 00:18:49.508 "name": "BaseBdev4", 00:18:49.508 "aliases": [ 00:18:49.508 "413f163d-acf6-4c4b-8328-d64317c862d3" 00:18:49.508 ], 00:18:49.508 "product_name": "Malloc disk", 00:18:49.508 "block_size": 512, 00:18:49.508 "num_blocks": 65536, 00:18:49.508 "uuid": "413f163d-acf6-4c4b-8328-d64317c862d3", 00:18:49.508 "assigned_rate_limits": { 00:18:49.508 "rw_ios_per_sec": 0, 00:18:49.508 "rw_mbytes_per_sec": 0, 00:18:49.508 "r_mbytes_per_sec": 0, 00:18:49.508 "w_mbytes_per_sec": 0 00:18:49.508 }, 00:18:49.508 "claimed": true, 00:18:49.508 "claim_type": "exclusive_write", 00:18:49.508 "zoned": false, 00:18:49.508 "supported_io_types": { 00:18:49.508 "read": true, 00:18:49.508 "write": true, 00:18:49.508 "unmap": true, 00:18:49.508 "write_zeroes": true, 00:18:49.508 "flush": true, 00:18:49.508 "reset": true, 00:18:49.508 "compare": false, 00:18:49.508 "compare_and_write": false, 00:18:49.508 "abort": true, 00:18:49.508 "nvme_admin": false, 00:18:49.508 "nvme_io": false 00:18:49.508 }, 00:18:49.508 "memory_domains": [ 00:18:49.508 { 00:18:49.508 "dma_device_id": "system", 00:18:49.508 "dma_device_type": 1 00:18:49.508 }, 00:18:49.508 { 00:18:49.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.508 "dma_device_type": 2 00:18:49.508 } 00:18:49.508 ], 00:18:49.508 "driver_specific": {} 00:18:49.508 } 00:18:49.508 ] 00:18:49.508 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:49.508 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:49.508 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:49.508 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:49.508 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:49.508 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:49.508 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:49.508 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:49.509 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:49.509 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:49.509 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:49.509 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:49.509 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:49.766 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.766 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.766 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:49.766 "name": "Existed_Raid", 00:18:49.766 "uuid": "6ae60670-a494-452d-abf1-6ec27baa9616", 00:18:49.766 "strip_size_kb": 0, 00:18:49.766 "state": "online", 00:18:49.766 "raid_level": "raid1", 00:18:49.766 "superblock": true, 00:18:49.766 "num_base_bdevs": 4, 00:18:49.766 "num_base_bdevs_discovered": 4, 00:18:49.766 "num_base_bdevs_operational": 4, 00:18:49.766 "base_bdevs_list": [ 00:18:49.766 { 00:18:49.766 "name": "BaseBdev1", 00:18:49.766 "uuid": "0914a117-54e6-4c1d-968c-448585910918", 00:18:49.766 "is_configured": true, 00:18:49.766 "data_offset": 2048, 00:18:49.766 "data_size": 63488 00:18:49.766 }, 00:18:49.766 { 00:18:49.766 "name": "BaseBdev2", 00:18:49.766 "uuid": "c2918012-4ba9-4a5d-98d8-e5567f2ee0b5", 00:18:49.766 "is_configured": true, 00:18:49.766 "data_offset": 2048, 00:18:49.766 "data_size": 63488 00:18:49.766 }, 00:18:49.766 { 00:18:49.766 "name": "BaseBdev3", 00:18:49.766 "uuid": "a2eb0740-5bcf-4824-b411-a4300a0f5a8a", 00:18:49.766 "is_configured": true, 00:18:49.766 "data_offset": 2048, 00:18:49.766 "data_size": 63488 00:18:49.766 }, 00:18:49.766 { 00:18:49.766 "name": "BaseBdev4", 00:18:49.766 "uuid": "413f163d-acf6-4c4b-8328-d64317c862d3", 00:18:49.766 "is_configured": true, 00:18:49.766 "data_offset": 2048, 00:18:49.766 "data_size": 63488 00:18:49.766 } 00:18:49.766 ] 00:18:49.766 }' 00:18:49.766 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:49.766 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_info 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:50.700 [2024-05-15 03:13:21.778029] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:50.700 "name": "Existed_Raid", 00:18:50.700 "aliases": [ 00:18:50.700 "6ae60670-a494-452d-abf1-6ec27baa9616" 00:18:50.700 ], 00:18:50.700 "product_name": "Raid Volume", 00:18:50.700 "block_size": 512, 00:18:50.700 "num_blocks": 63488, 00:18:50.700 "uuid": "6ae60670-a494-452d-abf1-6ec27baa9616", 00:18:50.700 "assigned_rate_limits": { 00:18:50.700 "rw_ios_per_sec": 0, 00:18:50.700 "rw_mbytes_per_sec": 0, 00:18:50.700 "r_mbytes_per_sec": 0, 00:18:50.700 "w_mbytes_per_sec": 0 00:18:50.700 }, 00:18:50.700 "claimed": false, 00:18:50.700 "zoned": false, 00:18:50.700 "supported_io_types": { 00:18:50.700 "read": true, 00:18:50.700 "write": true, 00:18:50.700 "unmap": false, 00:18:50.700 "write_zeroes": true, 00:18:50.700 "flush": false, 00:18:50.700 "reset": true, 00:18:50.700 "compare": false, 00:18:50.700 "compare_and_write": false, 00:18:50.700 "abort": false, 00:18:50.700 "nvme_admin": false, 00:18:50.700 "nvme_io": false 00:18:50.700 }, 00:18:50.700 "memory_domains": [ 00:18:50.700 { 00:18:50.700 "dma_device_id": "system", 00:18:50.700 "dma_device_type": 1 00:18:50.700 }, 00:18:50.700 { 00:18:50.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.700 "dma_device_type": 2 00:18:50.700 }, 00:18:50.700 { 00:18:50.700 "dma_device_id": "system", 00:18:50.700 "dma_device_type": 1 00:18:50.700 }, 00:18:50.700 { 00:18:50.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.700 "dma_device_type": 2 00:18:50.700 }, 00:18:50.700 { 00:18:50.700 "dma_device_id": "system", 00:18:50.700 "dma_device_type": 1 00:18:50.700 }, 00:18:50.700 { 00:18:50.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.700 "dma_device_type": 2 00:18:50.700 }, 00:18:50.700 { 00:18:50.700 "dma_device_id": "system", 00:18:50.700 "dma_device_type": 1 00:18:50.700 }, 00:18:50.700 { 00:18:50.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.700 "dma_device_type": 2 00:18:50.700 } 00:18:50.700 ], 00:18:50.700 "driver_specific": { 00:18:50.700 "raid": { 00:18:50.700 "uuid": "6ae60670-a494-452d-abf1-6ec27baa9616", 00:18:50.700 "strip_size_kb": 0, 00:18:50.700 "state": "online", 00:18:50.700 "raid_level": "raid1", 00:18:50.700 "superblock": true, 00:18:50.700 "num_base_bdevs": 4, 00:18:50.700 "num_base_bdevs_discovered": 4, 00:18:50.700 "num_base_bdevs_operational": 4, 00:18:50.700 "base_bdevs_list": [ 00:18:50.700 { 00:18:50.700 "name": "BaseBdev1", 00:18:50.700 "uuid": "0914a117-54e6-4c1d-968c-448585910918", 00:18:50.700 "is_configured": true, 00:18:50.700 "data_offset": 2048, 00:18:50.700 "data_size": 63488 00:18:50.700 }, 00:18:50.700 { 00:18:50.700 "name": "BaseBdev2", 00:18:50.700 "uuid": "c2918012-4ba9-4a5d-98d8-e5567f2ee0b5", 00:18:50.700 "is_configured": true, 00:18:50.700 "data_offset": 2048, 00:18:50.700 "data_size": 63488 00:18:50.700 }, 00:18:50.700 { 
00:18:50.700 "name": "BaseBdev3", 00:18:50.700 "uuid": "a2eb0740-5bcf-4824-b411-a4300a0f5a8a", 00:18:50.700 "is_configured": true, 00:18:50.700 "data_offset": 2048, 00:18:50.700 "data_size": 63488 00:18:50.700 }, 00:18:50.700 { 00:18:50.700 "name": "BaseBdev4", 00:18:50.700 "uuid": "413f163d-acf6-4c4b-8328-d64317c862d3", 00:18:50.700 "is_configured": true, 00:18:50.700 "data_offset": 2048, 00:18:50.700 "data_size": 63488 00:18:50.700 } 00:18:50.700 ] 00:18:50.700 } 00:18:50.700 } 00:18:50.700 }' 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:18:50.700 BaseBdev2 00:18:50.700 BaseBdev3 00:18:50.700 BaseBdev4' 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:50.700 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:50.958 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:50.958 "name": "BaseBdev1", 00:18:50.958 "aliases": [ 00:18:50.958 "0914a117-54e6-4c1d-968c-448585910918" 00:18:50.958 ], 00:18:50.958 "product_name": "Malloc disk", 00:18:50.958 "block_size": 512, 00:18:50.958 "num_blocks": 65536, 00:18:50.958 "uuid": "0914a117-54e6-4c1d-968c-448585910918", 00:18:50.958 "assigned_rate_limits": { 00:18:50.958 "rw_ios_per_sec": 0, 00:18:50.958 "rw_mbytes_per_sec": 0, 00:18:50.958 "r_mbytes_per_sec": 0, 00:18:50.958 "w_mbytes_per_sec": 0 00:18:50.958 }, 00:18:50.958 "claimed": true, 00:18:50.958 "claim_type": "exclusive_write", 00:18:50.958 "zoned": false, 00:18:50.958 "supported_io_types": { 00:18:50.958 "read": true, 00:18:50.958 "write": true, 00:18:50.958 "unmap": true, 00:18:50.958 "write_zeroes": true, 00:18:50.958 "flush": true, 00:18:50.958 "reset": true, 00:18:50.958 "compare": false, 00:18:50.958 "compare_and_write": false, 00:18:50.958 "abort": true, 00:18:50.958 "nvme_admin": false, 00:18:50.958 "nvme_io": false 00:18:50.958 }, 00:18:50.958 "memory_domains": [ 00:18:50.958 { 00:18:50.958 "dma_device_id": "system", 00:18:50.958 "dma_device_type": 1 00:18:50.958 }, 00:18:50.958 { 00:18:50.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.958 "dma_device_type": 2 00:18:50.958 } 00:18:50.958 ], 00:18:50.958 "driver_specific": {} 00:18:50.958 }' 00:18:50.958 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:51.215 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:51.215 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:51.215 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:51.215 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:51.215 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.215 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:51.215 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq 
.md_interleave 00:18:51.473 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.473 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:51.473 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:51.473 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:51.473 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:51.473 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:51.473 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:51.732 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:51.732 "name": "BaseBdev2", 00:18:51.732 "aliases": [ 00:18:51.732 "c2918012-4ba9-4a5d-98d8-e5567f2ee0b5" 00:18:51.732 ], 00:18:51.732 "product_name": "Malloc disk", 00:18:51.732 "block_size": 512, 00:18:51.732 "num_blocks": 65536, 00:18:51.732 "uuid": "c2918012-4ba9-4a5d-98d8-e5567f2ee0b5", 00:18:51.732 "assigned_rate_limits": { 00:18:51.732 "rw_ios_per_sec": 0, 00:18:51.732 "rw_mbytes_per_sec": 0, 00:18:51.732 "r_mbytes_per_sec": 0, 00:18:51.732 "w_mbytes_per_sec": 0 00:18:51.732 }, 00:18:51.732 "claimed": true, 00:18:51.732 "claim_type": "exclusive_write", 00:18:51.732 "zoned": false, 00:18:51.732 "supported_io_types": { 00:18:51.732 "read": true, 00:18:51.732 "write": true, 00:18:51.732 "unmap": true, 00:18:51.732 "write_zeroes": true, 00:18:51.732 "flush": true, 00:18:51.732 "reset": true, 00:18:51.732 "compare": false, 00:18:51.732 "compare_and_write": false, 00:18:51.732 "abort": true, 00:18:51.732 "nvme_admin": false, 00:18:51.732 "nvme_io": false 00:18:51.732 }, 00:18:51.732 "memory_domains": [ 00:18:51.732 { 00:18:51.732 "dma_device_id": "system", 00:18:51.732 "dma_device_type": 1 00:18:51.732 }, 00:18:51.732 { 00:18:51.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.732 "dma_device_type": 2 00:18:51.732 } 00:18:51.732 ], 00:18:51.732 "driver_specific": {} 00:18:51.732 }' 00:18:51.732 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:51.732 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:51.732 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:51.732 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:51.732 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:51.990 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.990 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:51.990 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:51.990 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.990 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:51.990 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:51.990 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == 
null ]] 00:18:51.990 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:51.990 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:51.990 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:52.248 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:52.248 "name": "BaseBdev3", 00:18:52.248 "aliases": [ 00:18:52.248 "a2eb0740-5bcf-4824-b411-a4300a0f5a8a" 00:18:52.248 ], 00:18:52.248 "product_name": "Malloc disk", 00:18:52.248 "block_size": 512, 00:18:52.249 "num_blocks": 65536, 00:18:52.249 "uuid": "a2eb0740-5bcf-4824-b411-a4300a0f5a8a", 00:18:52.249 "assigned_rate_limits": { 00:18:52.249 "rw_ios_per_sec": 0, 00:18:52.249 "rw_mbytes_per_sec": 0, 00:18:52.249 "r_mbytes_per_sec": 0, 00:18:52.249 "w_mbytes_per_sec": 0 00:18:52.249 }, 00:18:52.249 "claimed": true, 00:18:52.249 "claim_type": "exclusive_write", 00:18:52.249 "zoned": false, 00:18:52.249 "supported_io_types": { 00:18:52.249 "read": true, 00:18:52.249 "write": true, 00:18:52.249 "unmap": true, 00:18:52.249 "write_zeroes": true, 00:18:52.249 "flush": true, 00:18:52.249 "reset": true, 00:18:52.249 "compare": false, 00:18:52.249 "compare_and_write": false, 00:18:52.249 "abort": true, 00:18:52.249 "nvme_admin": false, 00:18:52.249 "nvme_io": false 00:18:52.249 }, 00:18:52.249 "memory_domains": [ 00:18:52.249 { 00:18:52.249 "dma_device_id": "system", 00:18:52.249 "dma_device_type": 1 00:18:52.249 }, 00:18:52.249 { 00:18:52.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.249 "dma_device_type": 2 00:18:52.249 } 00:18:52.249 ], 00:18:52.249 "driver_specific": {} 00:18:52.249 }' 00:18:52.249 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:52.249 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:52.519 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:52.519 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:52.519 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:52.519 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:52.519 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:52.519 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:52.519 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:52.519 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:52.520 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:52.779 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:52.779 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:52.779 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:52.779 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 
00:18:53.036 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:53.036 "name": "BaseBdev4", 00:18:53.036 "aliases": [ 00:18:53.036 "413f163d-acf6-4c4b-8328-d64317c862d3" 00:18:53.036 ], 00:18:53.036 "product_name": "Malloc disk", 00:18:53.036 "block_size": 512, 00:18:53.036 "num_blocks": 65536, 00:18:53.036 "uuid": "413f163d-acf6-4c4b-8328-d64317c862d3", 00:18:53.036 "assigned_rate_limits": { 00:18:53.036 "rw_ios_per_sec": 0, 00:18:53.036 "rw_mbytes_per_sec": 0, 00:18:53.036 "r_mbytes_per_sec": 0, 00:18:53.036 "w_mbytes_per_sec": 0 00:18:53.036 }, 00:18:53.036 "claimed": true, 00:18:53.036 "claim_type": "exclusive_write", 00:18:53.036 "zoned": false, 00:18:53.036 "supported_io_types": { 00:18:53.036 "read": true, 00:18:53.036 "write": true, 00:18:53.036 "unmap": true, 00:18:53.036 "write_zeroes": true, 00:18:53.036 "flush": true, 00:18:53.036 "reset": true, 00:18:53.036 "compare": false, 00:18:53.036 "compare_and_write": false, 00:18:53.036 "abort": true, 00:18:53.036 "nvme_admin": false, 00:18:53.036 "nvme_io": false 00:18:53.036 }, 00:18:53.036 "memory_domains": [ 00:18:53.036 { 00:18:53.036 "dma_device_id": "system", 00:18:53.036 "dma_device_type": 1 00:18:53.036 }, 00:18:53.036 { 00:18:53.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.036 "dma_device_type": 2 00:18:53.036 } 00:18:53.036 ], 00:18:53.036 "driver_specific": {} 00:18:53.036 }' 00:18:53.036 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:53.036 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:53.036 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:53.036 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:53.036 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:53.036 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:53.036 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:53.036 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:53.294 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:53.294 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:53.294 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:53.294 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:53.294 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:53.578 [2024-05-15 03:13:24.561226] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 
00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.578 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.847 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.847 "name": "Existed_Raid", 00:18:53.847 "uuid": "6ae60670-a494-452d-abf1-6ec27baa9616", 00:18:53.847 "strip_size_kb": 0, 00:18:53.847 "state": "online", 00:18:53.847 "raid_level": "raid1", 00:18:53.847 "superblock": true, 00:18:53.847 "num_base_bdevs": 4, 00:18:53.847 "num_base_bdevs_discovered": 3, 00:18:53.847 "num_base_bdevs_operational": 3, 00:18:53.847 "base_bdevs_list": [ 00:18:53.847 { 00:18:53.847 "name": null, 00:18:53.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.847 "is_configured": false, 00:18:53.847 "data_offset": 2048, 00:18:53.847 "data_size": 63488 00:18:53.847 }, 00:18:53.847 { 00:18:53.847 "name": "BaseBdev2", 00:18:53.847 "uuid": "c2918012-4ba9-4a5d-98d8-e5567f2ee0b5", 00:18:53.847 "is_configured": true, 00:18:53.847 "data_offset": 2048, 00:18:53.847 "data_size": 63488 00:18:53.847 }, 00:18:53.847 { 00:18:53.847 "name": "BaseBdev3", 00:18:53.847 "uuid": "a2eb0740-5bcf-4824-b411-a4300a0f5a8a", 00:18:53.847 "is_configured": true, 00:18:53.847 "data_offset": 2048, 00:18:53.847 "data_size": 63488 00:18:53.847 }, 00:18:53.847 { 00:18:53.847 "name": "BaseBdev4", 00:18:53.847 "uuid": "413f163d-acf6-4c4b-8328-d64317c862d3", 00:18:53.847 "is_configured": true, 00:18:53.847 "data_offset": 2048, 00:18:53.847 "data_size": 63488 00:18:53.847 } 00:18:53.847 ] 00:18:53.847 }' 00:18:53.847 03:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.847 03:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.412 03:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:54.412 03:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:54.412 03:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.412 03:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:54.671 03:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:54.671 03:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:54.671 03:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:54.929 [2024-05-15 03:13:25.833891] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:54.929 03:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:54.929 03:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:54.929 03:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.929 03:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:55.187 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:55.187 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:55.187 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:55.445 [2024-05-15 03:13:26.357450] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:55.445 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:55.445 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:55.445 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.445 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:55.703 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:55.703 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:55.703 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:55.961 [2024-05-15 03:13:26.873469] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:55.961 [2024-05-15 03:13:26.873541] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.961 [2024-05-15 03:13:26.884120] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.961 [2024-05-15 03:13:26.884181] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.962 [2024-05-15 03:13:26.884192] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1255670 name Existed_Raid, state offline 00:18:55.962 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 
00:18:55.962 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:55.962 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.962 03:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:18:56.220 03:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:18:56.220 03:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:18:56.220 03:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:18:56.220 03:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:18:56.220 03:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:56.220 03:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:56.477 BaseBdev2 00:18:56.477 03:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:18:56.477 03:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:56.477 03:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:56.477 03:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:56.477 03:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:56.477 03:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:56.477 03:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:56.735 03:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:56.735 [ 00:18:56.735 { 00:18:56.735 "name": "BaseBdev2", 00:18:56.735 "aliases": [ 00:18:56.735 "3578aa54-7a8e-4186-83ea-50f4313cdf34" 00:18:56.735 ], 00:18:56.735 "product_name": "Malloc disk", 00:18:56.735 "block_size": 512, 00:18:56.735 "num_blocks": 65536, 00:18:56.735 "uuid": "3578aa54-7a8e-4186-83ea-50f4313cdf34", 00:18:56.735 "assigned_rate_limits": { 00:18:56.735 "rw_ios_per_sec": 0, 00:18:56.735 "rw_mbytes_per_sec": 0, 00:18:56.735 "r_mbytes_per_sec": 0, 00:18:56.735 "w_mbytes_per_sec": 0 00:18:56.735 }, 00:18:56.735 "claimed": false, 00:18:56.735 "zoned": false, 00:18:56.735 "supported_io_types": { 00:18:56.735 "read": true, 00:18:56.735 "write": true, 00:18:56.735 "unmap": true, 00:18:56.735 "write_zeroes": true, 00:18:56.735 "flush": true, 00:18:56.735 "reset": true, 00:18:56.735 "compare": false, 00:18:56.735 "compare_and_write": false, 00:18:56.735 "abort": true, 00:18:56.735 "nvme_admin": false, 00:18:56.735 "nvme_io": false 00:18:56.735 }, 00:18:56.735 "memory_domains": [ 00:18:56.735 { 00:18:56.735 "dma_device_id": "system", 00:18:56.735 "dma_device_type": 1 00:18:56.735 }, 00:18:56.735 { 00:18:56.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.735 "dma_device_type": 2 
00:18:56.735 } 00:18:56.735 ], 00:18:56.735 "driver_specific": {} 00:18:56.735 } 00:18:56.735 ] 00:18:56.994 03:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:56.994 03:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:56.994 03:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:56.994 03:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:56.994 BaseBdev3 00:18:56.994 03:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:18:56.994 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:56.994 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:56.994 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:56.994 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:56.994 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:56.994 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:57.561 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:57.561 [ 00:18:57.561 { 00:18:57.561 "name": "BaseBdev3", 00:18:57.561 "aliases": [ 00:18:57.561 "0ee4b63c-144c-448e-80fb-6b8df4469231" 00:18:57.561 ], 00:18:57.561 "product_name": "Malloc disk", 00:18:57.561 "block_size": 512, 00:18:57.561 "num_blocks": 65536, 00:18:57.561 "uuid": "0ee4b63c-144c-448e-80fb-6b8df4469231", 00:18:57.561 "assigned_rate_limits": { 00:18:57.561 "rw_ios_per_sec": 0, 00:18:57.561 "rw_mbytes_per_sec": 0, 00:18:57.561 "r_mbytes_per_sec": 0, 00:18:57.561 "w_mbytes_per_sec": 0 00:18:57.561 }, 00:18:57.561 "claimed": false, 00:18:57.561 "zoned": false, 00:18:57.561 "supported_io_types": { 00:18:57.561 "read": true, 00:18:57.561 "write": true, 00:18:57.561 "unmap": true, 00:18:57.561 "write_zeroes": true, 00:18:57.561 "flush": true, 00:18:57.561 "reset": true, 00:18:57.561 "compare": false, 00:18:57.561 "compare_and_write": false, 00:18:57.561 "abort": true, 00:18:57.561 "nvme_admin": false, 00:18:57.561 "nvme_io": false 00:18:57.561 }, 00:18:57.561 "memory_domains": [ 00:18:57.561 { 00:18:57.561 "dma_device_id": "system", 00:18:57.561 "dma_device_type": 1 00:18:57.561 }, 00:18:57.561 { 00:18:57.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.561 "dma_device_type": 2 00:18:57.561 } 00:18:57.561 ], 00:18:57.561 "driver_specific": {} 00:18:57.561 } 00:18:57.561 ] 00:18:57.561 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:57.561 03:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:57.561 03:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:57.561 03:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:57.820 BaseBdev4 00:18:57.820 03:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:18:57.820 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:18:57.820 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:57.820 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:57.820 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:57.820 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:57.820 03:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:58.077 03:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:58.334 [ 00:18:58.334 { 00:18:58.334 "name": "BaseBdev4", 00:18:58.334 "aliases": [ 00:18:58.334 "c2a88669-741f-4613-841c-52098979fad1" 00:18:58.334 ], 00:18:58.334 "product_name": "Malloc disk", 00:18:58.334 "block_size": 512, 00:18:58.334 "num_blocks": 65536, 00:18:58.334 "uuid": "c2a88669-741f-4613-841c-52098979fad1", 00:18:58.334 "assigned_rate_limits": { 00:18:58.334 "rw_ios_per_sec": 0, 00:18:58.334 "rw_mbytes_per_sec": 0, 00:18:58.334 "r_mbytes_per_sec": 0, 00:18:58.334 "w_mbytes_per_sec": 0 00:18:58.334 }, 00:18:58.334 "claimed": false, 00:18:58.334 "zoned": false, 00:18:58.334 "supported_io_types": { 00:18:58.334 "read": true, 00:18:58.334 "write": true, 00:18:58.334 "unmap": true, 00:18:58.334 "write_zeroes": true, 00:18:58.334 "flush": true, 00:18:58.334 "reset": true, 00:18:58.334 "compare": false, 00:18:58.334 "compare_and_write": false, 00:18:58.334 "abort": true, 00:18:58.334 "nvme_admin": false, 00:18:58.334 "nvme_io": false 00:18:58.334 }, 00:18:58.334 "memory_domains": [ 00:18:58.334 { 00:18:58.334 "dma_device_id": "system", 00:18:58.334 "dma_device_type": 1 00:18:58.334 }, 00:18:58.334 { 00:18:58.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.334 "dma_device_type": 2 00:18:58.334 } 00:18:58.334 ], 00:18:58.334 "driver_specific": {} 00:18:58.334 } 00:18:58.334 ] 00:18:58.334 03:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:58.334 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:58.334 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:58.334 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:58.591 [2024-05-15 03:13:29.648680] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.591 [2024-05-15 03:13:29.648717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.591 [2024-05-15 03:13:29.648734] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:18:58.591 [2024-05-15 03:13:29.650124] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:58.591 [2024-05-15 03:13:29.650167] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:58.591 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:58.591 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:58.591 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:58.591 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:58.591 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:58.591 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:58.591 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:58.591 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:58.591 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:58.591 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:58.591 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.591 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.852 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:58.852 "name": "Existed_Raid", 00:18:58.852 "uuid": "f62a8a2c-ef87-4cc7-9bd2-eefd96a64bee", 00:18:58.852 "strip_size_kb": 0, 00:18:58.852 "state": "configuring", 00:18:58.852 "raid_level": "raid1", 00:18:58.852 "superblock": true, 00:18:58.852 "num_base_bdevs": 4, 00:18:58.852 "num_base_bdevs_discovered": 3, 00:18:58.852 "num_base_bdevs_operational": 4, 00:18:58.852 "base_bdevs_list": [ 00:18:58.852 { 00:18:58.852 "name": "BaseBdev1", 00:18:58.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.852 "is_configured": false, 00:18:58.852 "data_offset": 0, 00:18:58.852 "data_size": 0 00:18:58.852 }, 00:18:58.852 { 00:18:58.852 "name": "BaseBdev2", 00:18:58.852 "uuid": "3578aa54-7a8e-4186-83ea-50f4313cdf34", 00:18:58.852 "is_configured": true, 00:18:58.852 "data_offset": 2048, 00:18:58.852 "data_size": 63488 00:18:58.852 }, 00:18:58.852 { 00:18:58.852 "name": "BaseBdev3", 00:18:58.852 "uuid": "0ee4b63c-144c-448e-80fb-6b8df4469231", 00:18:58.852 "is_configured": true, 00:18:58.852 "data_offset": 2048, 00:18:58.852 "data_size": 63488 00:18:58.852 }, 00:18:58.852 { 00:18:58.852 "name": "BaseBdev4", 00:18:58.852 "uuid": "c2a88669-741f-4613-841c-52098979fad1", 00:18:58.852 "is_configured": true, 00:18:58.852 "data_offset": 2048, 00:18:58.853 "data_size": 63488 00:18:58.853 } 00:18:58.853 ] 00:18:58.853 }' 00:18:58.853 03:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:58.853 03:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.420 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:59.679 [2024-05-15 03:13:30.803913] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:59.679 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:59.679 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:59.679 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:59.679 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:59.679 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:59.679 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:59.679 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:59.679 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:59.679 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:59.679 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:59.679 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.679 03:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.937 03:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:59.937 "name": "Existed_Raid", 00:18:59.937 "uuid": "f62a8a2c-ef87-4cc7-9bd2-eefd96a64bee", 00:18:59.937 "strip_size_kb": 0, 00:18:59.937 "state": "configuring", 00:18:59.937 "raid_level": "raid1", 00:18:59.937 "superblock": true, 00:18:59.937 "num_base_bdevs": 4, 00:18:59.937 "num_base_bdevs_discovered": 2, 00:18:59.937 "num_base_bdevs_operational": 4, 00:18:59.937 "base_bdevs_list": [ 00:18:59.937 { 00:18:59.937 "name": "BaseBdev1", 00:18:59.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.937 "is_configured": false, 00:18:59.937 "data_offset": 0, 00:18:59.937 "data_size": 0 00:18:59.937 }, 00:18:59.937 { 00:18:59.937 "name": null, 00:18:59.937 "uuid": "3578aa54-7a8e-4186-83ea-50f4313cdf34", 00:18:59.937 "is_configured": false, 00:18:59.937 "data_offset": 2048, 00:18:59.937 "data_size": 63488 00:18:59.937 }, 00:18:59.937 { 00:18:59.937 "name": "BaseBdev3", 00:18:59.937 "uuid": "0ee4b63c-144c-448e-80fb-6b8df4469231", 00:18:59.937 "is_configured": true, 00:18:59.937 "data_offset": 2048, 00:18:59.937 "data_size": 63488 00:18:59.937 }, 00:18:59.937 { 00:18:59.937 "name": "BaseBdev4", 00:18:59.937 "uuid": "c2a88669-741f-4613-841c-52098979fad1", 00:18:59.937 "is_configured": true, 00:18:59.937 "data_offset": 2048, 00:18:59.937 "data_size": 63488 00:18:59.937 } 00:18:59.937 ] 00:18:59.937 }' 00:18:59.937 03:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:59.937 03:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.870 03:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.870 03:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:00.870 03:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:19:00.870 03:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:01.128 [2024-05-15 03:13:32.186923] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.128 BaseBdev1 00:19:01.128 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:19:01.128 03:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:01.128 03:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:01.128 03:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:01.128 03:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:01.128 03:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:01.128 03:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:01.385 03:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:01.643 [ 00:19:01.643 { 00:19:01.643 "name": "BaseBdev1", 00:19:01.643 "aliases": [ 00:19:01.643 "25344a89-b8d0-453a-bd9b-b6fb495785f0" 00:19:01.643 ], 00:19:01.643 "product_name": "Malloc disk", 00:19:01.643 "block_size": 512, 00:19:01.643 "num_blocks": 65536, 00:19:01.643 "uuid": "25344a89-b8d0-453a-bd9b-b6fb495785f0", 00:19:01.643 "assigned_rate_limits": { 00:19:01.643 "rw_ios_per_sec": 0, 00:19:01.643 "rw_mbytes_per_sec": 0, 00:19:01.643 "r_mbytes_per_sec": 0, 00:19:01.643 "w_mbytes_per_sec": 0 00:19:01.643 }, 00:19:01.643 "claimed": true, 00:19:01.643 "claim_type": "exclusive_write", 00:19:01.643 "zoned": false, 00:19:01.643 "supported_io_types": { 00:19:01.643 "read": true, 00:19:01.643 "write": true, 00:19:01.643 "unmap": true, 00:19:01.643 "write_zeroes": true, 00:19:01.643 "flush": true, 00:19:01.643 "reset": true, 00:19:01.643 "compare": false, 00:19:01.643 "compare_and_write": false, 00:19:01.643 "abort": true, 00:19:01.643 "nvme_admin": false, 00:19:01.643 "nvme_io": false 00:19:01.643 }, 00:19:01.643 "memory_domains": [ 00:19:01.643 { 00:19:01.643 "dma_device_id": "system", 00:19:01.643 "dma_device_type": 1 00:19:01.643 }, 00:19:01.643 { 00:19:01.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.643 "dma_device_type": 2 00:19:01.643 } 00:19:01.643 ], 00:19:01.643 "driver_specific": {} 00:19:01.643 } 00:19:01.643 ] 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local raid_bdev_name=Existed_Raid 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.643 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.901 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:01.901 "name": "Existed_Raid", 00:19:01.901 "uuid": "f62a8a2c-ef87-4cc7-9bd2-eefd96a64bee", 00:19:01.901 "strip_size_kb": 0, 00:19:01.901 "state": "configuring", 00:19:01.901 "raid_level": "raid1", 00:19:01.901 "superblock": true, 00:19:01.901 "num_base_bdevs": 4, 00:19:01.901 "num_base_bdevs_discovered": 3, 00:19:01.901 "num_base_bdevs_operational": 4, 00:19:01.901 "base_bdevs_list": [ 00:19:01.901 { 00:19:01.901 "name": "BaseBdev1", 00:19:01.901 "uuid": "25344a89-b8d0-453a-bd9b-b6fb495785f0", 00:19:01.901 "is_configured": true, 00:19:01.901 "data_offset": 2048, 00:19:01.901 "data_size": 63488 00:19:01.901 }, 00:19:01.901 { 00:19:01.901 "name": null, 00:19:01.901 "uuid": "3578aa54-7a8e-4186-83ea-50f4313cdf34", 00:19:01.901 "is_configured": false, 00:19:01.901 "data_offset": 2048, 00:19:01.901 "data_size": 63488 00:19:01.901 }, 00:19:01.901 { 00:19:01.901 "name": "BaseBdev3", 00:19:01.901 "uuid": "0ee4b63c-144c-448e-80fb-6b8df4469231", 00:19:01.901 "is_configured": true, 00:19:01.901 "data_offset": 2048, 00:19:01.901 "data_size": 63488 00:19:01.901 }, 00:19:01.901 { 00:19:01.901 "name": "BaseBdev4", 00:19:01.901 "uuid": "c2a88669-741f-4613-841c-52098979fad1", 00:19:01.901 "is_configured": true, 00:19:01.901 "data_offset": 2048, 00:19:01.901 "data_size": 63488 00:19:01.901 } 00:19:01.901 ] 00:19:01.901 }' 00:19:01.901 03:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:01.901 03:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.466 03:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.466 03:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:02.723 03:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:02.723 03:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:02.981 [2024-05-15 03:13:34.059990] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:02.981 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:02.981 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:02.981 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:02.981 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:02.981 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:02.981 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:02.981 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.981 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.981 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.981 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.981 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.981 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.238 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.238 "name": "Existed_Raid", 00:19:03.239 "uuid": "f62a8a2c-ef87-4cc7-9bd2-eefd96a64bee", 00:19:03.239 "strip_size_kb": 0, 00:19:03.239 "state": "configuring", 00:19:03.239 "raid_level": "raid1", 00:19:03.239 "superblock": true, 00:19:03.239 "num_base_bdevs": 4, 00:19:03.239 "num_base_bdevs_discovered": 2, 00:19:03.239 "num_base_bdevs_operational": 4, 00:19:03.239 "base_bdevs_list": [ 00:19:03.239 { 00:19:03.239 "name": "BaseBdev1", 00:19:03.239 "uuid": "25344a89-b8d0-453a-bd9b-b6fb495785f0", 00:19:03.239 "is_configured": true, 00:19:03.239 "data_offset": 2048, 00:19:03.239 "data_size": 63488 00:19:03.239 }, 00:19:03.239 { 00:19:03.239 "name": null, 00:19:03.239 "uuid": "3578aa54-7a8e-4186-83ea-50f4313cdf34", 00:19:03.239 "is_configured": false, 00:19:03.239 "data_offset": 2048, 00:19:03.239 "data_size": 63488 00:19:03.239 }, 00:19:03.239 { 00:19:03.239 "name": null, 00:19:03.239 "uuid": "0ee4b63c-144c-448e-80fb-6b8df4469231", 00:19:03.239 "is_configured": false, 00:19:03.239 "data_offset": 2048, 00:19:03.239 "data_size": 63488 00:19:03.239 }, 00:19:03.239 { 00:19:03.239 "name": "BaseBdev4", 00:19:03.239 "uuid": "c2a88669-741f-4613-841c-52098979fad1", 00:19:03.239 "is_configured": true, 00:19:03.239 "data_offset": 2048, 00:19:03.239 "data_size": 63488 00:19:03.239 } 00:19:03.239 ] 00:19:03.239 }' 00:19:03.239 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.239 03:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.804 03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.804 
03:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:04.061 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:19:04.061 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:04.319 [2024-05-15 03:13:35.443866] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:04.319 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:04.319 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:04.319 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:04.319 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:04.319 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:04.319 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:04.319 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:04.319 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:04.319 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:04.319 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:04.319 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.319 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.576 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.576 "name": "Existed_Raid", 00:19:04.576 "uuid": "f62a8a2c-ef87-4cc7-9bd2-eefd96a64bee", 00:19:04.576 "strip_size_kb": 0, 00:19:04.576 "state": "configuring", 00:19:04.576 "raid_level": "raid1", 00:19:04.576 "superblock": true, 00:19:04.576 "num_base_bdevs": 4, 00:19:04.576 "num_base_bdevs_discovered": 3, 00:19:04.576 "num_base_bdevs_operational": 4, 00:19:04.576 "base_bdevs_list": [ 00:19:04.576 { 00:19:04.576 "name": "BaseBdev1", 00:19:04.576 "uuid": "25344a89-b8d0-453a-bd9b-b6fb495785f0", 00:19:04.576 "is_configured": true, 00:19:04.576 "data_offset": 2048, 00:19:04.576 "data_size": 63488 00:19:04.576 }, 00:19:04.576 { 00:19:04.576 "name": null, 00:19:04.576 "uuid": "3578aa54-7a8e-4186-83ea-50f4313cdf34", 00:19:04.576 "is_configured": false, 00:19:04.576 "data_offset": 2048, 00:19:04.576 "data_size": 63488 00:19:04.576 }, 00:19:04.576 { 00:19:04.576 "name": "BaseBdev3", 00:19:04.576 "uuid": "0ee4b63c-144c-448e-80fb-6b8df4469231", 00:19:04.576 "is_configured": true, 00:19:04.576 "data_offset": 2048, 00:19:04.577 "data_size": 63488 00:19:04.577 }, 00:19:04.577 { 00:19:04.577 "name": "BaseBdev4", 00:19:04.577 "uuid": "c2a88669-741f-4613-841c-52098979fad1", 00:19:04.577 "is_configured": true, 00:19:04.577 "data_offset": 2048, 00:19:04.577 "data_size": 63488 00:19:04.577 } 00:19:04.577 ] 00:19:04.577 }' 
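The stretch above (bdev_raid.sh@313-323) exercises hot removal and re-attachment on the configuring array: bdev_raid_remove_base_bdev flips a slot's is_configured flag to false and its name to null while the raid stays in the configuring state, and bdev_raid_add_base_bdev claims the bdev again, restoring the slot. A condensed bash sketch of the pattern, with the RPC method names and jq filters copied from the trace (the slot_state helper and the exit handling are illustrative assumptions):

  #!/usr/bin/env bash
  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Report the is_configured flag of base-bdev slot $1 in the first raid bdev.
  slot_state() { $rpc bdev_raid_get_bdevs all | jq ".[0].base_bdevs_list[$1].is_configured"; }

  # Drop one leg of the raid1 set; slot 2 must report is_configured == false.
  $rpc bdev_raid_remove_base_bdev BaseBdev3
  [ "$(slot_state 2)" = false ] || exit 1

  # Re-attach the bdev to the array; the slot flips back to true.
  $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3
  [ "$(slot_state 2)" = true ] || exit 1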
00:19:04.577 03:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.577 03:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.509 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.509 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:05.509 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:19:05.510 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:05.768 [2024-05-15 03:13:36.835604] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:05.768 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:05.768 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:05.768 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:05.768 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:05.768 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:05.768 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:05.768 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.768 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.768 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.768 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.768 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.768 03:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.027 03:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.027 "name": "Existed_Raid", 00:19:06.027 "uuid": "f62a8a2c-ef87-4cc7-9bd2-eefd96a64bee", 00:19:06.027 "strip_size_kb": 0, 00:19:06.027 "state": "configuring", 00:19:06.027 "raid_level": "raid1", 00:19:06.027 "superblock": true, 00:19:06.027 "num_base_bdevs": 4, 00:19:06.027 "num_base_bdevs_discovered": 2, 00:19:06.027 "num_base_bdevs_operational": 4, 00:19:06.027 "base_bdevs_list": [ 00:19:06.027 { 00:19:06.027 "name": null, 00:19:06.027 "uuid": "25344a89-b8d0-453a-bd9b-b6fb495785f0", 00:19:06.027 "is_configured": false, 00:19:06.027 "data_offset": 2048, 00:19:06.027 "data_size": 63488 00:19:06.027 }, 00:19:06.027 { 00:19:06.027 "name": null, 00:19:06.027 "uuid": "3578aa54-7a8e-4186-83ea-50f4313cdf34", 00:19:06.027 "is_configured": false, 00:19:06.027 "data_offset": 2048, 00:19:06.027 "data_size": 63488 00:19:06.027 }, 00:19:06.027 { 00:19:06.027 "name": "BaseBdev3", 00:19:06.027 "uuid": 
"0ee4b63c-144c-448e-80fb-6b8df4469231", 00:19:06.027 "is_configured": true, 00:19:06.027 "data_offset": 2048, 00:19:06.027 "data_size": 63488 00:19:06.027 }, 00:19:06.027 { 00:19:06.027 "name": "BaseBdev4", 00:19:06.027 "uuid": "c2a88669-741f-4613-841c-52098979fad1", 00:19:06.027 "is_configured": true, 00:19:06.027 "data_offset": 2048, 00:19:06.027 "data_size": 63488 00:19:06.027 } 00:19:06.027 ] 00:19:06.027 }' 00:19:06.027 03:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.027 03:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 03:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.593 03:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:06.851 03:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:19:06.851 03:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:07.108 [2024-05-15 03:13:38.229835] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:07.108 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:07.108 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:07.108 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:07.108 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:07.108 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:07.108 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:07.108 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:07.108 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:07.108 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:07.108 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:07.108 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.108 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.366 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:07.366 "name": "Existed_Raid", 00:19:07.366 "uuid": "f62a8a2c-ef87-4cc7-9bd2-eefd96a64bee", 00:19:07.366 "strip_size_kb": 0, 00:19:07.366 "state": "configuring", 00:19:07.366 "raid_level": "raid1", 00:19:07.366 "superblock": true, 00:19:07.366 "num_base_bdevs": 4, 00:19:07.366 "num_base_bdevs_discovered": 3, 00:19:07.366 "num_base_bdevs_operational": 4, 00:19:07.366 "base_bdevs_list": [ 00:19:07.366 { 00:19:07.366 "name": null, 00:19:07.366 "uuid": 
"25344a89-b8d0-453a-bd9b-b6fb495785f0", 00:19:07.366 "is_configured": false, 00:19:07.366 "data_offset": 2048, 00:19:07.366 "data_size": 63488 00:19:07.366 }, 00:19:07.366 { 00:19:07.366 "name": "BaseBdev2", 00:19:07.366 "uuid": "3578aa54-7a8e-4186-83ea-50f4313cdf34", 00:19:07.366 "is_configured": true, 00:19:07.366 "data_offset": 2048, 00:19:07.366 "data_size": 63488 00:19:07.366 }, 00:19:07.366 { 00:19:07.366 "name": "BaseBdev3", 00:19:07.366 "uuid": "0ee4b63c-144c-448e-80fb-6b8df4469231", 00:19:07.366 "is_configured": true, 00:19:07.366 "data_offset": 2048, 00:19:07.366 "data_size": 63488 00:19:07.366 }, 00:19:07.366 { 00:19:07.366 "name": "BaseBdev4", 00:19:07.366 "uuid": "c2a88669-741f-4613-841c-52098979fad1", 00:19:07.366 "is_configured": true, 00:19:07.366 "data_offset": 2048, 00:19:07.366 "data_size": 63488 00:19:07.366 } 00:19:07.366 ] 00:19:07.366 }' 00:19:07.366 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:07.366 03:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.329 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.329 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:08.329 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:19:08.329 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.329 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:08.587 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 25344a89-b8d0-453a-bd9b-b6fb495785f0 00:19:08.844 [2024-05-15 03:13:39.785170] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:08.844 [2024-05-15 03:13:39.785332] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x13fcc10 00:19:08.844 [2024-05-15 03:13:39.785344] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:08.844 [2024-05-15 03:13:39.785530] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1409de0 00:19:08.844 [2024-05-15 03:13:39.785666] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13fcc10 00:19:08.844 [2024-05-15 03:13:39.785675] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x13fcc10 00:19:08.845 [2024-05-15 03:13:39.785769] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.845 NewBaseBdev 00:19:08.845 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:19:08.845 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:19:08.845 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:08.845 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:08.845 03:13:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:08.845 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:08.845 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:09.102 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:09.360 [ 00:19:09.360 { 00:19:09.360 "name": "NewBaseBdev", 00:19:09.360 "aliases": [ 00:19:09.360 "25344a89-b8d0-453a-bd9b-b6fb495785f0" 00:19:09.360 ], 00:19:09.360 "product_name": "Malloc disk", 00:19:09.360 "block_size": 512, 00:19:09.360 "num_blocks": 65536, 00:19:09.360 "uuid": "25344a89-b8d0-453a-bd9b-b6fb495785f0", 00:19:09.360 "assigned_rate_limits": { 00:19:09.360 "rw_ios_per_sec": 0, 00:19:09.360 "rw_mbytes_per_sec": 0, 00:19:09.360 "r_mbytes_per_sec": 0, 00:19:09.360 "w_mbytes_per_sec": 0 00:19:09.360 }, 00:19:09.360 "claimed": true, 00:19:09.360 "claim_type": "exclusive_write", 00:19:09.360 "zoned": false, 00:19:09.360 "supported_io_types": { 00:19:09.360 "read": true, 00:19:09.360 "write": true, 00:19:09.360 "unmap": true, 00:19:09.360 "write_zeroes": true, 00:19:09.360 "flush": true, 00:19:09.360 "reset": true, 00:19:09.360 "compare": false, 00:19:09.360 "compare_and_write": false, 00:19:09.360 "abort": true, 00:19:09.360 "nvme_admin": false, 00:19:09.360 "nvme_io": false 00:19:09.360 }, 00:19:09.360 "memory_domains": [ 00:19:09.360 { 00:19:09.360 "dma_device_id": "system", 00:19:09.360 "dma_device_type": 1 00:19:09.360 }, 00:19:09.360 { 00:19:09.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.360 "dma_device_type": 2 00:19:09.360 } 00:19:09.360 ], 00:19:09.360 "driver_specific": {} 00:19:09.360 } 00:19:09.360 ] 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.360 03:13:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.617 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:09.617 "name": "Existed_Raid", 00:19:09.617 "uuid": "f62a8a2c-ef87-4cc7-9bd2-eefd96a64bee", 00:19:09.617 "strip_size_kb": 0, 00:19:09.617 "state": "online", 00:19:09.617 "raid_level": "raid1", 00:19:09.617 "superblock": true, 00:19:09.618 "num_base_bdevs": 4, 00:19:09.618 "num_base_bdevs_discovered": 4, 00:19:09.618 "num_base_bdevs_operational": 4, 00:19:09.618 "base_bdevs_list": [ 00:19:09.618 { 00:19:09.618 "name": "NewBaseBdev", 00:19:09.618 "uuid": "25344a89-b8d0-453a-bd9b-b6fb495785f0", 00:19:09.618 "is_configured": true, 00:19:09.618 "data_offset": 2048, 00:19:09.618 "data_size": 63488 00:19:09.618 }, 00:19:09.618 { 00:19:09.618 "name": "BaseBdev2", 00:19:09.618 "uuid": "3578aa54-7a8e-4186-83ea-50f4313cdf34", 00:19:09.618 "is_configured": true, 00:19:09.618 "data_offset": 2048, 00:19:09.618 "data_size": 63488 00:19:09.618 }, 00:19:09.618 { 00:19:09.618 "name": "BaseBdev3", 00:19:09.618 "uuid": "0ee4b63c-144c-448e-80fb-6b8df4469231", 00:19:09.618 "is_configured": true, 00:19:09.618 "data_offset": 2048, 00:19:09.618 "data_size": 63488 00:19:09.618 }, 00:19:09.618 { 00:19:09.618 "name": "BaseBdev4", 00:19:09.618 "uuid": "c2a88669-741f-4613-841c-52098979fad1", 00:19:09.618 "is_configured": true, 00:19:09.618 "data_offset": 2048, 00:19:09.618 "data_size": 63488 00:19:09.618 } 00:19:09.618 ] 00:19:09.618 }' 00:19:09.618 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:09.618 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.182 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:19:10.182 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:19:10.182 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:10.182 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:10.182 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:10.182 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:19:10.182 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:10.182 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:10.440 [2024-05-15 03:13:41.389823] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:10.440 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:10.440 "name": "Existed_Raid", 00:19:10.440 "aliases": [ 00:19:10.440 "f62a8a2c-ef87-4cc7-9bd2-eefd96a64bee" 00:19:10.440 ], 00:19:10.440 "product_name": "Raid Volume", 00:19:10.440 "block_size": 512, 00:19:10.440 "num_blocks": 63488, 00:19:10.440 "uuid": "f62a8a2c-ef87-4cc7-9bd2-eefd96a64bee", 00:19:10.440 "assigned_rate_limits": { 00:19:10.440 "rw_ios_per_sec": 0, 00:19:10.440 "rw_mbytes_per_sec": 0, 00:19:10.440 "r_mbytes_per_sec": 0, 00:19:10.440 "w_mbytes_per_sec": 0 00:19:10.440 }, 00:19:10.440 "claimed": false, 00:19:10.440 "zoned": false, 00:19:10.440 "supported_io_types": { 
00:19:10.440 "read": true, 00:19:10.440 "write": true, 00:19:10.440 "unmap": false, 00:19:10.440 "write_zeroes": true, 00:19:10.440 "flush": false, 00:19:10.440 "reset": true, 00:19:10.440 "compare": false, 00:19:10.440 "compare_and_write": false, 00:19:10.440 "abort": false, 00:19:10.440 "nvme_admin": false, 00:19:10.440 "nvme_io": false 00:19:10.440 }, 00:19:10.440 "memory_domains": [ 00:19:10.440 { 00:19:10.440 "dma_device_id": "system", 00:19:10.440 "dma_device_type": 1 00:19:10.440 }, 00:19:10.440 { 00:19:10.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.440 "dma_device_type": 2 00:19:10.440 }, 00:19:10.440 { 00:19:10.440 "dma_device_id": "system", 00:19:10.440 "dma_device_type": 1 00:19:10.440 }, 00:19:10.440 { 00:19:10.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.440 "dma_device_type": 2 00:19:10.440 }, 00:19:10.440 { 00:19:10.440 "dma_device_id": "system", 00:19:10.440 "dma_device_type": 1 00:19:10.440 }, 00:19:10.440 { 00:19:10.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.440 "dma_device_type": 2 00:19:10.440 }, 00:19:10.440 { 00:19:10.440 "dma_device_id": "system", 00:19:10.440 "dma_device_type": 1 00:19:10.440 }, 00:19:10.440 { 00:19:10.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.440 "dma_device_type": 2 00:19:10.440 } 00:19:10.440 ], 00:19:10.440 "driver_specific": { 00:19:10.440 "raid": { 00:19:10.440 "uuid": "f62a8a2c-ef87-4cc7-9bd2-eefd96a64bee", 00:19:10.440 "strip_size_kb": 0, 00:19:10.440 "state": "online", 00:19:10.440 "raid_level": "raid1", 00:19:10.440 "superblock": true, 00:19:10.440 "num_base_bdevs": 4, 00:19:10.440 "num_base_bdevs_discovered": 4, 00:19:10.440 "num_base_bdevs_operational": 4, 00:19:10.440 "base_bdevs_list": [ 00:19:10.440 { 00:19:10.440 "name": "NewBaseBdev", 00:19:10.440 "uuid": "25344a89-b8d0-453a-bd9b-b6fb495785f0", 00:19:10.440 "is_configured": true, 00:19:10.440 "data_offset": 2048, 00:19:10.440 "data_size": 63488 00:19:10.440 }, 00:19:10.440 { 00:19:10.440 "name": "BaseBdev2", 00:19:10.440 "uuid": "3578aa54-7a8e-4186-83ea-50f4313cdf34", 00:19:10.440 "is_configured": true, 00:19:10.440 "data_offset": 2048, 00:19:10.440 "data_size": 63488 00:19:10.440 }, 00:19:10.440 { 00:19:10.440 "name": "BaseBdev3", 00:19:10.440 "uuid": "0ee4b63c-144c-448e-80fb-6b8df4469231", 00:19:10.440 "is_configured": true, 00:19:10.440 "data_offset": 2048, 00:19:10.440 "data_size": 63488 00:19:10.440 }, 00:19:10.440 { 00:19:10.440 "name": "BaseBdev4", 00:19:10.440 "uuid": "c2a88669-741f-4613-841c-52098979fad1", 00:19:10.440 "is_configured": true, 00:19:10.440 "data_offset": 2048, 00:19:10.440 "data_size": 63488 00:19:10.440 } 00:19:10.440 ] 00:19:10.440 } 00:19:10.440 } 00:19:10.440 }' 00:19:10.440 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:10.440 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:19:10.440 BaseBdev2 00:19:10.440 BaseBdev3 00:19:10.440 BaseBdev4' 00:19:10.440 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:10.440 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:10.440 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:10.698 03:13:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:10.698 "name": "NewBaseBdev", 00:19:10.698 "aliases": [ 00:19:10.698 "25344a89-b8d0-453a-bd9b-b6fb495785f0" 00:19:10.698 ], 00:19:10.698 "product_name": "Malloc disk", 00:19:10.698 "block_size": 512, 00:19:10.698 "num_blocks": 65536, 00:19:10.698 "uuid": "25344a89-b8d0-453a-bd9b-b6fb495785f0", 00:19:10.698 "assigned_rate_limits": { 00:19:10.698 "rw_ios_per_sec": 0, 00:19:10.698 "rw_mbytes_per_sec": 0, 00:19:10.698 "r_mbytes_per_sec": 0, 00:19:10.698 "w_mbytes_per_sec": 0 00:19:10.698 }, 00:19:10.698 "claimed": true, 00:19:10.698 "claim_type": "exclusive_write", 00:19:10.698 "zoned": false, 00:19:10.698 "supported_io_types": { 00:19:10.698 "read": true, 00:19:10.698 "write": true, 00:19:10.698 "unmap": true, 00:19:10.698 "write_zeroes": true, 00:19:10.698 "flush": true, 00:19:10.698 "reset": true, 00:19:10.698 "compare": false, 00:19:10.698 "compare_and_write": false, 00:19:10.698 "abort": true, 00:19:10.698 "nvme_admin": false, 00:19:10.698 "nvme_io": false 00:19:10.698 }, 00:19:10.698 "memory_domains": [ 00:19:10.698 { 00:19:10.698 "dma_device_id": "system", 00:19:10.698 "dma_device_type": 1 00:19:10.698 }, 00:19:10.698 { 00:19:10.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.698 "dma_device_type": 2 00:19:10.698 } 00:19:10.698 ], 00:19:10.698 "driver_specific": {} 00:19:10.698 }' 00:19:10.698 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:10.698 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:10.698 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:10.698 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:10.955 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:10.955 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:10.955 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:10.955 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:10.955 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:10.955 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:10.955 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:10.955 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:10.955 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:10.955 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:10.955 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:11.213 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:11.213 "name": "BaseBdev2", 00:19:11.213 "aliases": [ 00:19:11.213 "3578aa54-7a8e-4186-83ea-50f4313cdf34" 00:19:11.213 ], 00:19:11.213 "product_name": "Malloc disk", 00:19:11.213 "block_size": 512, 00:19:11.213 "num_blocks": 65536, 00:19:11.213 "uuid": "3578aa54-7a8e-4186-83ea-50f4313cdf34", 00:19:11.213 "assigned_rate_limits": { 00:19:11.213 "rw_ios_per_sec": 0, 00:19:11.213 
"rw_mbytes_per_sec": 0, 00:19:11.213 "r_mbytes_per_sec": 0, 00:19:11.213 "w_mbytes_per_sec": 0 00:19:11.213 }, 00:19:11.213 "claimed": true, 00:19:11.213 "claim_type": "exclusive_write", 00:19:11.213 "zoned": false, 00:19:11.213 "supported_io_types": { 00:19:11.213 "read": true, 00:19:11.213 "write": true, 00:19:11.213 "unmap": true, 00:19:11.213 "write_zeroes": true, 00:19:11.213 "flush": true, 00:19:11.213 "reset": true, 00:19:11.213 "compare": false, 00:19:11.213 "compare_and_write": false, 00:19:11.213 "abort": true, 00:19:11.213 "nvme_admin": false, 00:19:11.213 "nvme_io": false 00:19:11.213 }, 00:19:11.213 "memory_domains": [ 00:19:11.213 { 00:19:11.213 "dma_device_id": "system", 00:19:11.213 "dma_device_type": 1 00:19:11.213 }, 00:19:11.213 { 00:19:11.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.213 "dma_device_type": 2 00:19:11.213 } 00:19:11.213 ], 00:19:11.213 "driver_specific": {} 00:19:11.213 }' 00:19:11.213 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:11.470 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:11.470 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:11.470 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:11.470 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:11.470 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:11.470 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:11.470 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:11.726 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:11.726 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:11.726 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:11.726 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:11.726 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:11.726 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:11.726 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:11.983 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:11.983 "name": "BaseBdev3", 00:19:11.983 "aliases": [ 00:19:11.983 "0ee4b63c-144c-448e-80fb-6b8df4469231" 00:19:11.983 ], 00:19:11.983 "product_name": "Malloc disk", 00:19:11.983 "block_size": 512, 00:19:11.983 "num_blocks": 65536, 00:19:11.983 "uuid": "0ee4b63c-144c-448e-80fb-6b8df4469231", 00:19:11.983 "assigned_rate_limits": { 00:19:11.983 "rw_ios_per_sec": 0, 00:19:11.983 "rw_mbytes_per_sec": 0, 00:19:11.983 "r_mbytes_per_sec": 0, 00:19:11.983 "w_mbytes_per_sec": 0 00:19:11.983 }, 00:19:11.983 "claimed": true, 00:19:11.983 "claim_type": "exclusive_write", 00:19:11.983 "zoned": false, 00:19:11.983 "supported_io_types": { 00:19:11.983 "read": true, 00:19:11.983 "write": true, 00:19:11.983 "unmap": true, 00:19:11.983 "write_zeroes": true, 00:19:11.983 "flush": true, 00:19:11.983 "reset": true, 
00:19:11.983 "compare": false, 00:19:11.983 "compare_and_write": false, 00:19:11.983 "abort": true, 00:19:11.983 "nvme_admin": false, 00:19:11.983 "nvme_io": false 00:19:11.983 }, 00:19:11.983 "memory_domains": [ 00:19:11.983 { 00:19:11.983 "dma_device_id": "system", 00:19:11.983 "dma_device_type": 1 00:19:11.983 }, 00:19:11.983 { 00:19:11.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.983 "dma_device_type": 2 00:19:11.983 } 00:19:11.983 ], 00:19:11.983 "driver_specific": {} 00:19:11.983 }' 00:19:11.983 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:11.983 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:11.983 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:11.983 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:11.983 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:12.240 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:12.240 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:12.240 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:12.240 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:12.240 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:12.240 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:12.240 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:12.240 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:12.240 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:12.240 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:12.496 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:12.496 "name": "BaseBdev4", 00:19:12.496 "aliases": [ 00:19:12.496 "c2a88669-741f-4613-841c-52098979fad1" 00:19:12.496 ], 00:19:12.496 "product_name": "Malloc disk", 00:19:12.496 "block_size": 512, 00:19:12.496 "num_blocks": 65536, 00:19:12.496 "uuid": "c2a88669-741f-4613-841c-52098979fad1", 00:19:12.496 "assigned_rate_limits": { 00:19:12.496 "rw_ios_per_sec": 0, 00:19:12.496 "rw_mbytes_per_sec": 0, 00:19:12.496 "r_mbytes_per_sec": 0, 00:19:12.496 "w_mbytes_per_sec": 0 00:19:12.496 }, 00:19:12.496 "claimed": true, 00:19:12.496 "claim_type": "exclusive_write", 00:19:12.496 "zoned": false, 00:19:12.496 "supported_io_types": { 00:19:12.496 "read": true, 00:19:12.496 "write": true, 00:19:12.496 "unmap": true, 00:19:12.496 "write_zeroes": true, 00:19:12.496 "flush": true, 00:19:12.496 "reset": true, 00:19:12.496 "compare": false, 00:19:12.496 "compare_and_write": false, 00:19:12.496 "abort": true, 00:19:12.496 "nvme_admin": false, 00:19:12.496 "nvme_io": false 00:19:12.496 }, 00:19:12.496 "memory_domains": [ 00:19:12.496 { 00:19:12.496 "dma_device_id": "system", 00:19:12.496 "dma_device_type": 1 00:19:12.496 }, 00:19:12.496 { 00:19:12.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.496 "dma_device_type": 2 00:19:12.496 
} 00:19:12.496 ], 00:19:12.496 "driver_specific": {} 00:19:12.496 }' 00:19:12.496 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:12.496 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:12.751 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:12.751 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:12.751 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:12.751 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:12.751 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:12.751 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:12.751 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:12.751 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:13.008 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:13.008 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:13.008 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:13.266 [2024-05-15 03:13:44.201230] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:13.266 [2024-05-15 03:13:44.201257] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:13.266 [2024-05-15 03:13:44.201309] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.266 [2024-05-15 03:13:44.201594] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.266 [2024-05-15 03:13:44.201604] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13fcc10 name Existed_Raid, state offline 00:19:13.266 03:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 4139716 00:19:13.266 03:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 4139716 ']' 00:19:13.266 03:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 4139716 00:19:13.266 03:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:19:13.266 03:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:13.266 03:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4139716 00:19:13.266 03:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:13.266 03:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:13.266 03:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4139716' 00:19:13.266 killing process with pid 4139716 00:19:13.266 03:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 4139716 00:19:13.266 [2024-05-15 03:13:44.264681] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:19:13.266 03:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 4139716 00:19:13.266 [2024-05-15 03:13:44.299298] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:13.523 03:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:19:13.523 00:19:13.523 real 0m32.903s 00:19:13.523 user 1m2.193s 00:19:13.523 sys 0m4.616s 00:19:13.523 03:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:13.523 03:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.523 ************************************ 00:19:13.523 END TEST raid_state_function_test_sb 00:19:13.523 ************************************ 00:19:13.523 03:13:44 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:19:13.523 03:13:44 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:13.523 03:13:44 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:13.523 03:13:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.523 ************************************ 00:19:13.523 START TEST raid_superblock_test 00:19:13.523 ************************************ 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 4 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=4145803 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 4145803 /var/tmp/spdk-raid.sock 00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:13.524 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 4145803 ']' 00:19:13.524 
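The entries that follow show raid_superblock_test setting up its locals and then starting a dedicated bdev_svc app to host the malloc, passthru, and raid bdevs. A minimal sketch of that launch-and-wait pattern; the polling loop here is a simplified stand-in for the waitforlisten helper in autotest_common.sh, and the relative paths assume the spdk checkout as the working directory:

    # host the bdevs in a bare bdev_svc app, with bdev_raid debug logging enabled
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # poll the JSON-RPC socket until the app answers a trivial request
    until ./scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done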
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=4145803
00:19:13.523 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 4145803 /var/tmp/spdk-raid.sock
00:19:13.524 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:19:13.524 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 4145803 ']'
00:19:13.524 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:19:13.524 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100
00:19:13.524 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:19:13.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:19:13.524 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable
00:19:13.524 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:13.524 [2024-05-15 03:13:44.658592] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:19:13.524 [2024-05-15 03:13:44.658648] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4145803 ]
00:19:13.781 [2024-05-15 03:13:44.757713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:13.781 [2024-05-15 03:13:44.855299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:13.781 [2024-05-15 03:13:44.934042] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:13.781 [2024-05-15 03:13:44.934086] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:14.712 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:19:14.712 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0
00:19:14.712 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:19:14.712 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:19:14.712 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:19:14.712 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:19:14.712 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:19:14.712 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:14.712 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:19:14.712 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:14.712 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:19:14.712 malloc1
00:19:14.967 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:14.967 [2024-05-15 03:13:46.107733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:14.967 [2024-05-15 03:13:46.107777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:14.967 [2024-05-15 03:13:46.107801] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x248ca00
00:19:14.967 [2024-05-15 03:13:46.107811] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:14.968 [2024-05-15 03:13:46.109445] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:14.968 [2024-05-15 03:13:46.109473] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:19:14.968 pt1
00:19:15.223 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:19:15.223 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:19:15.223 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:19:15.223 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:19:15.223 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:19:15.223 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:15.224 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:19:15.224 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:15.224 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:19:15.224 malloc2
00:19:15.481 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:15.481 [2024-05-15 03:13:46.633756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:15.481 [2024-05-15 03:13:46.633799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:15.481 [2024-05-15 03:13:46.633818] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x248d5f0
00:19:15.481 [2024-05-15 03:13:46.633827] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:15.481 [2024-05-15 03:13:46.635382] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:15.481 [2024-05-15 03:13:46.635410] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:15.481 pt2
00:19:15.738 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:19:15.738 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:19:15.738 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:19:15.738 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:19:15.738 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:19:15.738 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:15.738 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:19:15.738 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:15.738 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:19:15.738 malloc3
00:19:15.994 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:19:15.994 [2024-05-15 03:13:47.139677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:19:15.994 [2024-05-15 03:13:47.139722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:15.994 [2024-05-15 03:13:47.139738] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2632900
00:19:15.994 [2024-05-15 03:13:47.139748] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:15.994 [2024-05-15 03:13:47.141363] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:15.994 [2024-05-15 03:13:47.141391] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:19:15.994 pt3
00:19:16.252 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:19:16.252 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:19:16.252 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:19:16.252 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:19:16.252 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:19:16.252 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:16.252 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:19:16.252 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:16.252 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:19:16.252 malloc4
00:19:16.509 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:19:16.509 [2024-05-15 03:13:47.641628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:19:16.509 [2024-05-15 03:13:47.641670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:16.509 [2024-05-15 03:13:47.641688] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2484630
00:19:16.509 [2024-05-15 03:13:47.641698] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:16.509 [2024-05-15 03:13:47.643212] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:16.509 [2024-05-15 03:13:47.643239] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:19:16.509 pt4
00:19:16.509 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:19:16.509 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
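All four base bdevs above are built the same way: a 32 MiB malloc bdev with 512-byte blocks (the 65536 num_blocks seen in the dumps), wrapped in a passthru bdev with a fixed UUID so the superblock written later can identify it. A condensed sketch of the loop these xtrace lines trace, ending with the raid1 create that the next entry issues (rpc.py paths shortened for readability):

    for i in 1 2 3 4; do
        # 32 MiB backing device, 512-byte blocks
        rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc$i
        # passthru wrapper with a deterministic UUID
        rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create \
            -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    # -s asks bdev_raid_create to write an on-disk superblock to every base bdev
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s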
00:19:16.509 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:19:16.767 [2024-05-15 03:13:47.882285] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:19:16.767 [2024-05-15 03:13:47.883596] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:16.767 [2024-05-15 03:13:47.883653] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:19:16.767 [2024-05-15 03:13:47.883699] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:19:16.767 [2024-05-15 03:13:47.883888] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x2485900
00:19:16.767 [2024-05-15 03:13:47.883899] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:16.767 [2024-05-15 03:13:47.884096] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24858d0
00:19:16.767 [2024-05-15 03:13:47.884257] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2485900
00:19:16.767 [2024-05-15 03:13:47.884266] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2485900
00:19:16.767 [2024-05-15 03:13:47.884367] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:16.767 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:19:16.767 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:16.767 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:16.767 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:16.767 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:16.767 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:16.767 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:16.767 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:16.767 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:16.767 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:16.767 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:16.767 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:17.024 03:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:17.024  "name": "raid_bdev1",
00:19:17.024  "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847",
00:19:17.024  "strip_size_kb": 0,
00:19:17.024  "state": "online",
00:19:17.024  "raid_level": "raid1",
00:19:17.024  "superblock": true,
00:19:17.024  "num_base_bdevs": 4,
00:19:17.024  "num_base_bdevs_discovered": 4,
00:19:17.024  "num_base_bdevs_operational": 4,
00:19:17.024  "base_bdevs_list": [
00:19:17.024  {
00:19:17.024  "name": "pt1",
00:19:17.024  "uuid": "2d2af25c-ef24-55e2-a927-b34266727ffc",
00:19:17.024  "is_configured": true,
00:19:17.024  "data_offset": 2048,
00:19:17.024  "data_size": 63488
00:19:17.024  },
00:19:17.024  {
00:19:17.024  "name": "pt2",
00:19:17.024  "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783",
00:19:17.024  "is_configured": true,
00:19:17.024  "data_offset":
2048, 00:19:17.024 "data_size": 63488 00:19:17.024 }, 00:19:17.024 { 00:19:17.024 "name": "pt3", 00:19:17.024 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:17.024 "is_configured": true, 00:19:17.024 "data_offset": 2048, 00:19:17.024 "data_size": 63488 00:19:17.024 }, 00:19:17.024 { 00:19:17.024 "name": "pt4", 00:19:17.024 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:17.024 "is_configured": true, 00:19:17.024 "data_offset": 2048, 00:19:17.024 "data_size": 63488 00:19:17.024 } 00:19:17.024 ] 00:19:17.024 }' 00:19:17.024 03:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.024 03:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.957 03:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:17.957 03:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:19:17.957 03:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:17.957 03:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:17.957 03:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:17.957 03:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:17.957 03:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:17.957 03:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:17.957 [2024-05-15 03:13:49.029620] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.957 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:17.957 "name": "raid_bdev1", 00:19:17.957 "aliases": [ 00:19:17.957 "1b54b816-c98a-443e-addb-bcf9aafde847" 00:19:17.957 ], 00:19:17.957 "product_name": "Raid Volume", 00:19:17.957 "block_size": 512, 00:19:17.957 "num_blocks": 63488, 00:19:17.957 "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847", 00:19:17.957 "assigned_rate_limits": { 00:19:17.957 "rw_ios_per_sec": 0, 00:19:17.957 "rw_mbytes_per_sec": 0, 00:19:17.957 "r_mbytes_per_sec": 0, 00:19:17.957 "w_mbytes_per_sec": 0 00:19:17.957 }, 00:19:17.957 "claimed": false, 00:19:17.957 "zoned": false, 00:19:17.957 "supported_io_types": { 00:19:17.957 "read": true, 00:19:17.957 "write": true, 00:19:17.957 "unmap": false, 00:19:17.957 "write_zeroes": true, 00:19:17.957 "flush": false, 00:19:17.957 "reset": true, 00:19:17.957 "compare": false, 00:19:17.957 "compare_and_write": false, 00:19:17.957 "abort": false, 00:19:17.957 "nvme_admin": false, 00:19:17.957 "nvme_io": false 00:19:17.957 }, 00:19:17.957 "memory_domains": [ 00:19:17.957 { 00:19:17.957 "dma_device_id": "system", 00:19:17.957 "dma_device_type": 1 00:19:17.957 }, 00:19:17.957 { 00:19:17.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.957 "dma_device_type": 2 00:19:17.957 }, 00:19:17.957 { 00:19:17.957 "dma_device_id": "system", 00:19:17.957 "dma_device_type": 1 00:19:17.957 }, 00:19:17.957 { 00:19:17.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.957 "dma_device_type": 2 00:19:17.957 }, 00:19:17.957 { 00:19:17.957 "dma_device_id": "system", 00:19:17.957 "dma_device_type": 1 00:19:17.957 }, 00:19:17.957 { 00:19:17.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.957 "dma_device_type": 2 00:19:17.957 }, 00:19:17.957 
{ 00:19:17.957 "dma_device_id": "system", 00:19:17.957 "dma_device_type": 1 00:19:17.957 }, 00:19:17.957 { 00:19:17.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.957 "dma_device_type": 2 00:19:17.957 } 00:19:17.957 ], 00:19:17.957 "driver_specific": { 00:19:17.957 "raid": { 00:19:17.957 "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847", 00:19:17.957 "strip_size_kb": 0, 00:19:17.957 "state": "online", 00:19:17.957 "raid_level": "raid1", 00:19:17.957 "superblock": true, 00:19:17.957 "num_base_bdevs": 4, 00:19:17.958 "num_base_bdevs_discovered": 4, 00:19:17.958 "num_base_bdevs_operational": 4, 00:19:17.958 "base_bdevs_list": [ 00:19:17.958 { 00:19:17.958 "name": "pt1", 00:19:17.958 "uuid": "2d2af25c-ef24-55e2-a927-b34266727ffc", 00:19:17.958 "is_configured": true, 00:19:17.958 "data_offset": 2048, 00:19:17.958 "data_size": 63488 00:19:17.958 }, 00:19:17.958 { 00:19:17.958 "name": "pt2", 00:19:17.958 "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783", 00:19:17.958 "is_configured": true, 00:19:17.958 "data_offset": 2048, 00:19:17.958 "data_size": 63488 00:19:17.958 }, 00:19:17.958 { 00:19:17.958 "name": "pt3", 00:19:17.958 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:17.958 "is_configured": true, 00:19:17.958 "data_offset": 2048, 00:19:17.958 "data_size": 63488 00:19:17.958 }, 00:19:17.958 { 00:19:17.958 "name": "pt4", 00:19:17.958 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:17.958 "is_configured": true, 00:19:17.958 "data_offset": 2048, 00:19:17.958 "data_size": 63488 00:19:17.958 } 00:19:17.958 ] 00:19:17.958 } 00:19:17.958 } 00:19:17.958 }' 00:19:17.958 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:17.958 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:19:17.958 pt2 00:19:17.958 pt3 00:19:17.958 pt4' 00:19:17.958 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:17.958 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:17.958 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:18.215 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:18.215 "name": "pt1", 00:19:18.215 "aliases": [ 00:19:18.215 "2d2af25c-ef24-55e2-a927-b34266727ffc" 00:19:18.215 ], 00:19:18.215 "product_name": "passthru", 00:19:18.215 "block_size": 512, 00:19:18.215 "num_blocks": 65536, 00:19:18.215 "uuid": "2d2af25c-ef24-55e2-a927-b34266727ffc", 00:19:18.215 "assigned_rate_limits": { 00:19:18.215 "rw_ios_per_sec": 0, 00:19:18.215 "rw_mbytes_per_sec": 0, 00:19:18.215 "r_mbytes_per_sec": 0, 00:19:18.215 "w_mbytes_per_sec": 0 00:19:18.215 }, 00:19:18.215 "claimed": true, 00:19:18.215 "claim_type": "exclusive_write", 00:19:18.215 "zoned": false, 00:19:18.215 "supported_io_types": { 00:19:18.215 "read": true, 00:19:18.215 "write": true, 00:19:18.215 "unmap": true, 00:19:18.215 "write_zeroes": true, 00:19:18.215 "flush": true, 00:19:18.215 "reset": true, 00:19:18.215 "compare": false, 00:19:18.215 "compare_and_write": false, 00:19:18.215 "abort": true, 00:19:18.215 "nvme_admin": false, 00:19:18.215 "nvme_io": false 00:19:18.215 }, 00:19:18.215 "memory_domains": [ 00:19:18.215 { 00:19:18.215 "dma_device_id": "system", 00:19:18.215 "dma_device_type": 1 00:19:18.215 }, 00:19:18.215 { 
00:19:18.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.215 "dma_device_type": 2 00:19:18.215 } 00:19:18.215 ], 00:19:18.215 "driver_specific": { 00:19:18.215 "passthru": { 00:19:18.215 "name": "pt1", 00:19:18.215 "base_bdev_name": "malloc1" 00:19:18.215 } 00:19:18.215 } 00:19:18.215 }' 00:19:18.215 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:18.472 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:18.472 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:18.472 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:18.472 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:18.472 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:18.472 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:18.472 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:18.729 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:18.729 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:18.729 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:18.729 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:18.729 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:18.729 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:18.729 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:18.986 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:18.986 "name": "pt2", 00:19:18.986 "aliases": [ 00:19:18.986 "e4c6f003-380b-597b-81e9-cdeec199d783" 00:19:18.986 ], 00:19:18.986 "product_name": "passthru", 00:19:18.986 "block_size": 512, 00:19:18.986 "num_blocks": 65536, 00:19:18.986 "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783", 00:19:18.986 "assigned_rate_limits": { 00:19:18.986 "rw_ios_per_sec": 0, 00:19:18.986 "rw_mbytes_per_sec": 0, 00:19:18.986 "r_mbytes_per_sec": 0, 00:19:18.986 "w_mbytes_per_sec": 0 00:19:18.986 }, 00:19:18.986 "claimed": true, 00:19:18.986 "claim_type": "exclusive_write", 00:19:18.986 "zoned": false, 00:19:18.986 "supported_io_types": { 00:19:18.986 "read": true, 00:19:18.986 "write": true, 00:19:18.986 "unmap": true, 00:19:18.986 "write_zeroes": true, 00:19:18.986 "flush": true, 00:19:18.986 "reset": true, 00:19:18.986 "compare": false, 00:19:18.986 "compare_and_write": false, 00:19:18.986 "abort": true, 00:19:18.986 "nvme_admin": false, 00:19:18.986 "nvme_io": false 00:19:18.986 }, 00:19:18.986 "memory_domains": [ 00:19:18.986 { 00:19:18.986 "dma_device_id": "system", 00:19:18.986 "dma_device_type": 1 00:19:18.986 }, 00:19:18.986 { 00:19:18.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.986 "dma_device_type": 2 00:19:18.986 } 00:19:18.986 ], 00:19:18.986 "driver_specific": { 00:19:18.986 "passthru": { 00:19:18.986 "name": "pt2", 00:19:18.986 "base_bdev_name": "malloc2" 00:19:18.986 } 00:19:18.986 } 00:19:18.986 }' 00:19:18.986 03:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:18.986 03:13:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:18.986 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:18.986 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:18.986 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:19.243 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:19.243 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:19.243 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:19.243 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:19.243 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:19.243 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:19.243 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:19.243 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:19.243 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:19.243 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:19.500 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:19.500 "name": "pt3", 00:19:19.500 "aliases": [ 00:19:19.500 "8965618a-5e71-5b59-a061-f24a40fbc5f8" 00:19:19.500 ], 00:19:19.500 "product_name": "passthru", 00:19:19.500 "block_size": 512, 00:19:19.500 "num_blocks": 65536, 00:19:19.500 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:19.500 "assigned_rate_limits": { 00:19:19.500 "rw_ios_per_sec": 0, 00:19:19.500 "rw_mbytes_per_sec": 0, 00:19:19.500 "r_mbytes_per_sec": 0, 00:19:19.500 "w_mbytes_per_sec": 0 00:19:19.500 }, 00:19:19.500 "claimed": true, 00:19:19.500 "claim_type": "exclusive_write", 00:19:19.500 "zoned": false, 00:19:19.500 "supported_io_types": { 00:19:19.500 "read": true, 00:19:19.500 "write": true, 00:19:19.500 "unmap": true, 00:19:19.500 "write_zeroes": true, 00:19:19.500 "flush": true, 00:19:19.500 "reset": true, 00:19:19.500 "compare": false, 00:19:19.500 "compare_and_write": false, 00:19:19.500 "abort": true, 00:19:19.500 "nvme_admin": false, 00:19:19.500 "nvme_io": false 00:19:19.500 }, 00:19:19.500 "memory_domains": [ 00:19:19.500 { 00:19:19.500 "dma_device_id": "system", 00:19:19.500 "dma_device_type": 1 00:19:19.500 }, 00:19:19.500 { 00:19:19.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.500 "dma_device_type": 2 00:19:19.500 } 00:19:19.500 ], 00:19:19.500 "driver_specific": { 00:19:19.500 "passthru": { 00:19:19.500 "name": "pt3", 00:19:19.500 "base_bdev_name": "malloc3" 00:19:19.500 } 00:19:19.500 } 00:19:19.500 }' 00:19:19.500 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:19.500 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:19.757 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:19.757 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:19.757 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:19.757 03:13:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:19.757 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:19.757 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:19.757 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:19.757 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:20.014 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:20.014 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:20.014 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:20.014 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:19:20.014 03:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:20.271 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:20.271 "name": "pt4", 00:19:20.271 "aliases": [ 00:19:20.271 "a96989d1-baf5-5e69-a52b-1706364d8618" 00:19:20.271 ], 00:19:20.271 "product_name": "passthru", 00:19:20.271 "block_size": 512, 00:19:20.271 "num_blocks": 65536, 00:19:20.271 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:20.271 "assigned_rate_limits": { 00:19:20.271 "rw_ios_per_sec": 0, 00:19:20.271 "rw_mbytes_per_sec": 0, 00:19:20.271 "r_mbytes_per_sec": 0, 00:19:20.271 "w_mbytes_per_sec": 0 00:19:20.271 }, 00:19:20.271 "claimed": true, 00:19:20.271 "claim_type": "exclusive_write", 00:19:20.271 "zoned": false, 00:19:20.271 "supported_io_types": { 00:19:20.271 "read": true, 00:19:20.271 "write": true, 00:19:20.271 "unmap": true, 00:19:20.271 "write_zeroes": true, 00:19:20.271 "flush": true, 00:19:20.271 "reset": true, 00:19:20.271 "compare": false, 00:19:20.271 "compare_and_write": false, 00:19:20.271 "abort": true, 00:19:20.271 "nvme_admin": false, 00:19:20.271 "nvme_io": false 00:19:20.271 }, 00:19:20.271 "memory_domains": [ 00:19:20.271 { 00:19:20.271 "dma_device_id": "system", 00:19:20.271 "dma_device_type": 1 00:19:20.271 }, 00:19:20.271 { 00:19:20.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.271 "dma_device_type": 2 00:19:20.271 } 00:19:20.271 ], 00:19:20.271 "driver_specific": { 00:19:20.271 "passthru": { 00:19:20.271 "name": "pt4", 00:19:20.271 "base_bdev_name": "malloc4" 00:19:20.271 } 00:19:20.271 } 00:19:20.271 }' 00:19:20.271 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:20.271 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:20.271 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:20.271 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:20.271 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:20.271 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:20.271 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:20.529 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:20.529 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:20.529 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq 
.dif_type
00:19:20.529 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type
00:19:20.529 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]]
00:19:20.529 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:19:20.529 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:19:20.786 [2024-05-15 03:13:51.825115] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:20.786 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1b54b816-c98a-443e-addb-bcf9aafde847
00:19:20.786 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1b54b816-c98a-443e-addb-bcf9aafde847 ']'
00:19:20.786 03:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:19:21.043 [2024-05-15 03:13:52.077500] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:21.043 [2024-05-15 03:13:52.077520] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:21.043 [2024-05-15 03:13:52.077567] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:21.043 [2024-05-15 03:13:52.077659] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:21.043 [2024-05-15 03:13:52.077669] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2485900 name raid_bdev1, state offline
00:19:21.043 03:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:21.043 03:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:19:21.301 03:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:19:21.301 03:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:19:21.301 03:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:19:21.301 03:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:19:21.557 03:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:19:21.816 03:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:19:22.118 03:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:19:22.118 03:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:19:22.375 03:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:19:22.375 03:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
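Teardown mirrors the bring-up: the test records the raid bdev's UUID, deletes raid_bdev1 (taking it from online to offline and releasing its claims), then deletes each passthru bdev; the malloc backing devices keep the superblock data. A minimal sketch of that sequence (rpc.py shorthand as above):

    uuid=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [ -n "$uuid" ]    # a superblock raid must have recorded a UUID
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    for pt in pt1 pt2 pt3 pt4; do
        rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete $pt
    done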
00:19:22.375 03:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:19:22.633 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:19:22.898 [2024-05-15 03:13:53.850157] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:19:22.898 [2024-05-15 03:13:53.851585] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:19:22.898 [2024-05-15 03:13:53.851629] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:19:22.898 [2024-05-15 03:13:53.851664] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:19:22.898 [2024-05-15 03:13:53.851708] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:19:22.898 [2024-05-15 03:13:53.851744] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:19:22.898 [2024-05-15 03:13:53.851765] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:19:22.898 [2024-05-15 03:13:53.851785] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:19:22.898 [2024-05-15 03:13:53.851799] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:22.898 [2024-05-15 03:13:53.851807] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2483ab0 name raid_bdev1, state configuring
00:19:22.898 request:
00:19:22.898 {
00:19:22.898  "name": "raid_bdev1",
00:19:22.898  "raid_level": "raid1",
00:19:22.898  "base_bdevs": [
00:19:22.898  "malloc1",
00:19:22.898  "malloc2",
00:19:22.898  "malloc3",
00:19:22.898  "malloc4"
00:19:22.898  ],
00:19:22.898  "superblock": false,
00:19:22.898  "method": "bdev_raid_create",
00:19:22.898  "req_id": 1
00:19:22.898 }
00:19:22.898 Got JSON-RPC error response
00:19:22.898 response:
00:19:22.898 {
00:19:22.898  "code": -17,
00:19:22.898  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:19:22.898 }
00:19:22.898 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1
00:19:22.898 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:19:22.898 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:19:22.898 03:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 ))
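This failed create is the point of the exercise: the raid superblock was written through the passthru layer onto the malloc backing devices, so building a new raid directly on malloc1..malloc4 trips the superblock check and is rejected with -17 (File exists) instead of overwriting the metadata. A sketch of asserting that failure from a shell, in the spirit of the NOT helper the harness uses:

    if rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
            -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "unexpected: raid created over bdevs carrying a foreign superblock" >&2
        exit 1
    fi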
00:19:22.898 03:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:22.898 03:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:19:23.156 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:19:23.156 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:19:23.156 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:23.413 [2024-05-15 03:13:54.355435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:23.413 [2024-05-15 03:13:54.355479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:23.413 [2024-05-15 03:13:54.355496] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2486040
00:19:23.413 [2024-05-15 03:13:54.355506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:23.413 [2024-05-15 03:13:54.357171] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:23.413 [2024-05-15 03:13:54.357199] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:19:23.413 [2024-05-15 03:13:54.357262] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:19:23.413 [2024-05-15 03:13:54.357287] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:19:23.413 pt1
00:19:23.413 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:19:23.413 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:23.413 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:23.413 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:23.413 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:23.413 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:23.413 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:23.413 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:23.413 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:23.413 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:23.413 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:23.413 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:23.671 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:23.671  "name": "raid_bdev1",
00:19:23.671  "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847",
00:19:23.671  "strip_size_kb": 0,
00:19:23.671  "state": "configuring",
00:19:23.671  "raid_level": "raid1",
00:19:23.671  "superblock": true,
00:19:23.671  "num_base_bdevs": 4,
00:19:23.671  "num_base_bdevs_discovered": 1,
00:19:23.671  "num_base_bdevs_operational": 4,
00:19:23.671  "base_bdevs_list": [
00:19:23.671  {
00:19:23.671  "name": "pt1",
00:19:23.671  "uuid": "2d2af25c-ef24-55e2-a927-b34266727ffc",
00:19:23.671  "is_configured": true,
00:19:23.671  "data_offset": 2048,
00:19:23.671  "data_size": 63488
00:19:23.671  },
00:19:23.671  {
00:19:23.671  "name": null,
00:19:23.671  "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783",
00:19:23.671  "is_configured": false,
00:19:23.671  "data_offset": 2048,
00:19:23.671  "data_size": 63488
00:19:23.671  },
00:19:23.671  {
00:19:23.671  "name": null,
00:19:23.671  "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8",
00:19:23.671  "is_configured": false,
00:19:23.671  "data_offset": 2048,
00:19:23.671  "data_size": 63488
00:19:23.671  },
00:19:23.671  {
00:19:23.671  "name": null,
00:19:23.671  "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618",
00:19:23.671  "is_configured": false,
00:19:23.671  "data_offset": 2048,
00:19:23.671  "data_size": 63488
00:19:23.671  }
00:19:23.671  ]
00:19:23.671  }'
00:19:23.671 03:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:23.671 03:13:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
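Re-creating pt1 demonstrates the examine path: the moment the passthru bdev registers, the raid module finds the superblock, claims the bdev, and resurrects raid_bdev1 in "configuring" state with one of four members discovered, with no bdev_raid_create involved. A sketch of probing that intermediate state:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create \
        -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state'    # expect: configuring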
00:19:24.236 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:19:24.236 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:24.493 [2024-05-15 03:13:55.454389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:24.493 [2024-05-15 03:13:55.454435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:24.493 [2024-05-15 03:13:55.454453] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2485dd0
00:19:24.493 [2024-05-15 03:13:55.454462] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:24.493 [2024-05-15 03:13:55.454801] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:24.493 [2024-05-15 03:13:55.454816] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:24.493 [2024-05-15 03:13:55.454902] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:19:24.493 [2024-05-15 03:13:55.454924] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:24.493 pt2
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:19:24.752 [2024-05-15 03:13:55.707182] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:24.752 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:25.011 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:25.011  "name": "raid_bdev1",
00:19:25.011  "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847",
00:19:25.011  "strip_size_kb": 0,
00:19:25.011  "state": "configuring",
00:19:25.011  "raid_level": "raid1",
00:19:25.011  "superblock": true,
00:19:25.011  "num_base_bdevs": 4,
00:19:25.011  "num_base_bdevs_discovered": 1,
00:19:25.011  "num_base_bdevs_operational": 4,
00:19:25.011  "base_bdevs_list": [
00:19:25.011  {
00:19:25.011  "name": "pt1",
00:19:25.011  "uuid": "2d2af25c-ef24-55e2-a927-b34266727ffc",
00:19:25.011  "is_configured": true,
00:19:25.011  "data_offset": 2048,
00:19:25.011  "data_size": 63488
00:19:25.011  },
00:19:25.011  {
00:19:25.011  "name": null,
00:19:25.011  "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783",
00:19:25.011  "is_configured": false,
00:19:25.011  "data_offset": 2048,
00:19:25.011  "data_size": 63488
00:19:25.011  },
00:19:25.011  {
00:19:25.011  "name": null,
00:19:25.011  "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8",
00:19:25.011  "is_configured": false,
00:19:25.011  "data_offset": 2048,
00:19:25.011  "data_size": 63488
00:19:25.011  },
00:19:25.011  {
00:19:25.011  "name": null,
00:19:25.011  "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618",
00:19:25.011  "is_configured": false,
00:19:25.011  "data_offset": 2048,
00:19:25.011  "data_size": 63488
00:19:25.011  }
00:19:25.011  ]
00:19:25.011  }'
00:19:25.011 03:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:25.011 03:13:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:25.576 03:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:19:25.576 03:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:19:25.576 03:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:25.834 [2024-05-15 03:13:56.830203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:25.834 [2024-05-15 03:13:56.830250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:25.834 [2024-05-15 03:13:56.830270] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2632d40
00:19:25.834 [2024-05-15 03:13:56.830279] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:25.834 [2024-05-15 03:13:56.830614] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:25.834 [2024-05-15 03:13:56.830629] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:25.834 [2024-05-15 03:13:56.830688] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:19:25.834 [2024-05-15 03:13:56.830705] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:25.834 pt2
00:19:26.092 03:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:19:26.092 03:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:19:26.092 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:19:26.092 [2024-05-15 03:13:57.086890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:19:26.092 [2024-05-15 03:13:57.086932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:26.092 [2024-05-15 03:13:57.086949] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x263df80
00:19:26.092 [2024-05-15 03:13:57.086958] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:26.092 [2024-05-15 03:13:57.087272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:26.092 [2024-05-15 03:13:57.087286] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:19:26.092 [2024-05-15 03:13:57.087341] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:19:26.092 [2024-05-15 03:13:57.087357] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:19:26.092 pt3
00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
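With pt2 and pt3 back, the volume is still "configuring"; the entries that follow re-create pt4, and the fourth superblock match completes the set, so the raid assembles and flips to "online" on its own. The same jq probe can watch the transition:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create \
        -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state'    # expect: online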
00:19:26.350 [2024-05-15 03:13:57.343610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.350 [2024-05-15 03:13:57.343625] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24852d0 00:19:26.350 [2024-05-15 03:13:57.343634] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.350 [2024-05-15 03:13:57.343965] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.350 [2024-05-15 03:13:57.343981] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:26.350 [2024-05-15 03:13:57.344035] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:26.350 [2024-05-15 03:13:57.344053] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:26.350 [2024-05-15 03:13:57.344177] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x2486340 00:19:26.350 [2024-05-15 03:13:57.344186] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:26.350 [2024-05-15 03:13:57.344361] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2489330 00:19:26.350 [2024-05-15 03:13:57.344504] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2486340 00:19:26.350 [2024-05-15 03:13:57.344512] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2486340 00:19:26.350 [2024-05-15 03:13:57.344612] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.350 pt4 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.350 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.609 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:26.609 "name": "raid_bdev1", 00:19:26.609 "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847", 00:19:26.609 "strip_size_kb": 0, 00:19:26.609 "state": "online", 00:19:26.609 "raid_level": "raid1", 
00:19:26.609 "superblock": true, 00:19:26.609 "num_base_bdevs": 4, 00:19:26.609 "num_base_bdevs_discovered": 4, 00:19:26.609 "num_base_bdevs_operational": 4, 00:19:26.609 "base_bdevs_list": [ 00:19:26.609 { 00:19:26.609 "name": "pt1", 00:19:26.609 "uuid": "2d2af25c-ef24-55e2-a927-b34266727ffc", 00:19:26.609 "is_configured": true, 00:19:26.609 "data_offset": 2048, 00:19:26.609 "data_size": 63488 00:19:26.609 }, 00:19:26.609 { 00:19:26.609 "name": "pt2", 00:19:26.609 "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783", 00:19:26.609 "is_configured": true, 00:19:26.609 "data_offset": 2048, 00:19:26.609 "data_size": 63488 00:19:26.609 }, 00:19:26.609 { 00:19:26.609 "name": "pt3", 00:19:26.609 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:26.609 "is_configured": true, 00:19:26.609 "data_offset": 2048, 00:19:26.609 "data_size": 63488 00:19:26.609 }, 00:19:26.609 { 00:19:26.609 "name": "pt4", 00:19:26.609 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:26.610 "is_configured": true, 00:19:26.610 "data_offset": 2048, 00:19:26.610 "data_size": 63488 00:19:26.610 } 00:19:26.610 ] 00:19:26.610 }' 00:19:26.610 03:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:26.610 03:13:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.175 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:27.175 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:19:27.175 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:27.175 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:27.175 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:27.175 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:27.175 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:27.175 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:27.433 [2024-05-15 03:13:58.454847] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:27.433 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:27.433 "name": "raid_bdev1", 00:19:27.433 "aliases": [ 00:19:27.433 "1b54b816-c98a-443e-addb-bcf9aafde847" 00:19:27.433 ], 00:19:27.433 "product_name": "Raid Volume", 00:19:27.433 "block_size": 512, 00:19:27.433 "num_blocks": 63488, 00:19:27.433 "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847", 00:19:27.433 "assigned_rate_limits": { 00:19:27.433 "rw_ios_per_sec": 0, 00:19:27.433 "rw_mbytes_per_sec": 0, 00:19:27.433 "r_mbytes_per_sec": 0, 00:19:27.433 "w_mbytes_per_sec": 0 00:19:27.433 }, 00:19:27.433 "claimed": false, 00:19:27.433 "zoned": false, 00:19:27.433 "supported_io_types": { 00:19:27.433 "read": true, 00:19:27.433 "write": true, 00:19:27.433 "unmap": false, 00:19:27.433 "write_zeroes": true, 00:19:27.433 "flush": false, 00:19:27.433 "reset": true, 00:19:27.433 "compare": false, 00:19:27.433 "compare_and_write": false, 00:19:27.433 "abort": false, 00:19:27.433 "nvme_admin": false, 00:19:27.433 "nvme_io": false 00:19:27.433 }, 00:19:27.433 "memory_domains": [ 00:19:27.433 { 00:19:27.433 "dma_device_id": "system", 00:19:27.433 "dma_device_type": 1 00:19:27.433 }, 
00:19:27.433 { 00:19:27.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.433 "dma_device_type": 2 00:19:27.433 }, 00:19:27.433 { 00:19:27.433 "dma_device_id": "system", 00:19:27.433 "dma_device_type": 1 00:19:27.433 }, 00:19:27.433 { 00:19:27.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.434 "dma_device_type": 2 00:19:27.434 }, 00:19:27.434 { 00:19:27.434 "dma_device_id": "system", 00:19:27.434 "dma_device_type": 1 00:19:27.434 }, 00:19:27.434 { 00:19:27.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.434 "dma_device_type": 2 00:19:27.434 }, 00:19:27.434 { 00:19:27.434 "dma_device_id": "system", 00:19:27.434 "dma_device_type": 1 00:19:27.434 }, 00:19:27.434 { 00:19:27.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.434 "dma_device_type": 2 00:19:27.434 } 00:19:27.434 ], 00:19:27.434 "driver_specific": { 00:19:27.434 "raid": { 00:19:27.434 "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847", 00:19:27.434 "strip_size_kb": 0, 00:19:27.434 "state": "online", 00:19:27.434 "raid_level": "raid1", 00:19:27.434 "superblock": true, 00:19:27.434 "num_base_bdevs": 4, 00:19:27.434 "num_base_bdevs_discovered": 4, 00:19:27.434 "num_base_bdevs_operational": 4, 00:19:27.434 "base_bdevs_list": [ 00:19:27.434 { 00:19:27.434 "name": "pt1", 00:19:27.434 "uuid": "2d2af25c-ef24-55e2-a927-b34266727ffc", 00:19:27.434 "is_configured": true, 00:19:27.434 "data_offset": 2048, 00:19:27.434 "data_size": 63488 00:19:27.434 }, 00:19:27.434 { 00:19:27.434 "name": "pt2", 00:19:27.434 "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783", 00:19:27.434 "is_configured": true, 00:19:27.434 "data_offset": 2048, 00:19:27.434 "data_size": 63488 00:19:27.434 }, 00:19:27.434 { 00:19:27.434 "name": "pt3", 00:19:27.434 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:27.434 "is_configured": true, 00:19:27.434 "data_offset": 2048, 00:19:27.434 "data_size": 63488 00:19:27.434 }, 00:19:27.434 { 00:19:27.434 "name": "pt4", 00:19:27.434 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:27.434 "is_configured": true, 00:19:27.434 "data_offset": 2048, 00:19:27.434 "data_size": 63488 00:19:27.434 } 00:19:27.434 ] 00:19:27.434 } 00:19:27.434 } 00:19:27.434 }' 00:19:27.434 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:27.434 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:19:27.434 pt2 00:19:27.434 pt3 00:19:27.434 pt4' 00:19:27.434 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:27.434 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:27.434 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:27.691 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:27.691 "name": "pt1", 00:19:27.691 "aliases": [ 00:19:27.691 "2d2af25c-ef24-55e2-a927-b34266727ffc" 00:19:27.691 ], 00:19:27.691 "product_name": "passthru", 00:19:27.691 "block_size": 512, 00:19:27.691 "num_blocks": 65536, 00:19:27.691 "uuid": "2d2af25c-ef24-55e2-a927-b34266727ffc", 00:19:27.691 "assigned_rate_limits": { 00:19:27.691 "rw_ios_per_sec": 0, 00:19:27.691 "rw_mbytes_per_sec": 0, 00:19:27.691 "r_mbytes_per_sec": 0, 00:19:27.691 "w_mbytes_per_sec": 0 00:19:27.691 }, 00:19:27.691 "claimed": true, 00:19:27.691 "claim_type": 
"exclusive_write", 00:19:27.691 "zoned": false, 00:19:27.691 "supported_io_types": { 00:19:27.691 "read": true, 00:19:27.691 "write": true, 00:19:27.691 "unmap": true, 00:19:27.691 "write_zeroes": true, 00:19:27.691 "flush": true, 00:19:27.691 "reset": true, 00:19:27.691 "compare": false, 00:19:27.691 "compare_and_write": false, 00:19:27.691 "abort": true, 00:19:27.691 "nvme_admin": false, 00:19:27.691 "nvme_io": false 00:19:27.691 }, 00:19:27.691 "memory_domains": [ 00:19:27.691 { 00:19:27.691 "dma_device_id": "system", 00:19:27.691 "dma_device_type": 1 00:19:27.691 }, 00:19:27.691 { 00:19:27.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.691 "dma_device_type": 2 00:19:27.691 } 00:19:27.691 ], 00:19:27.691 "driver_specific": { 00:19:27.691 "passthru": { 00:19:27.691 "name": "pt1", 00:19:27.691 "base_bdev_name": "malloc1" 00:19:27.691 } 00:19:27.691 } 00:19:27.691 }' 00:19:27.691 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:27.691 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:27.949 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:27.949 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:27.949 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:27.949 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:27.949 03:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:27.949 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:27.949 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:27.949 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:27.949 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:28.207 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:28.207 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:28.207 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:28.207 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:28.464 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:28.464 "name": "pt2", 00:19:28.464 "aliases": [ 00:19:28.464 "e4c6f003-380b-597b-81e9-cdeec199d783" 00:19:28.464 ], 00:19:28.464 "product_name": "passthru", 00:19:28.464 "block_size": 512, 00:19:28.464 "num_blocks": 65536, 00:19:28.464 "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783", 00:19:28.464 "assigned_rate_limits": { 00:19:28.464 "rw_ios_per_sec": 0, 00:19:28.464 "rw_mbytes_per_sec": 0, 00:19:28.464 "r_mbytes_per_sec": 0, 00:19:28.464 "w_mbytes_per_sec": 0 00:19:28.464 }, 00:19:28.464 "claimed": true, 00:19:28.464 "claim_type": "exclusive_write", 00:19:28.464 "zoned": false, 00:19:28.464 "supported_io_types": { 00:19:28.464 "read": true, 00:19:28.464 "write": true, 00:19:28.464 "unmap": true, 00:19:28.464 "write_zeroes": true, 00:19:28.464 "flush": true, 00:19:28.464 "reset": true, 00:19:28.464 "compare": false, 00:19:28.464 "compare_and_write": false, 00:19:28.464 "abort": true, 00:19:28.464 "nvme_admin": false, 00:19:28.464 "nvme_io": false 00:19:28.464 
}, 00:19:28.465 "memory_domains": [ 00:19:28.465 { 00:19:28.465 "dma_device_id": "system", 00:19:28.465 "dma_device_type": 1 00:19:28.465 }, 00:19:28.465 { 00:19:28.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.465 "dma_device_type": 2 00:19:28.465 } 00:19:28.465 ], 00:19:28.465 "driver_specific": { 00:19:28.465 "passthru": { 00:19:28.465 "name": "pt2", 00:19:28.465 "base_bdev_name": "malloc2" 00:19:28.465 } 00:19:28.465 } 00:19:28.465 }' 00:19:28.465 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:28.465 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:28.465 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:28.465 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:28.465 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:28.465 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:28.465 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:28.465 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:28.723 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:28.723 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:28.723 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:28.723 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:28.723 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:28.723 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:28.723 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:28.981 03:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:28.981 "name": "pt3", 00:19:28.981 "aliases": [ 00:19:28.981 "8965618a-5e71-5b59-a061-f24a40fbc5f8" 00:19:28.981 ], 00:19:28.981 "product_name": "passthru", 00:19:28.981 "block_size": 512, 00:19:28.981 "num_blocks": 65536, 00:19:28.981 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:28.981 "assigned_rate_limits": { 00:19:28.981 "rw_ios_per_sec": 0, 00:19:28.981 "rw_mbytes_per_sec": 0, 00:19:28.981 "r_mbytes_per_sec": 0, 00:19:28.981 "w_mbytes_per_sec": 0 00:19:28.981 }, 00:19:28.981 "claimed": true, 00:19:28.981 "claim_type": "exclusive_write", 00:19:28.981 "zoned": false, 00:19:28.981 "supported_io_types": { 00:19:28.981 "read": true, 00:19:28.981 "write": true, 00:19:28.981 "unmap": true, 00:19:28.981 "write_zeroes": true, 00:19:28.981 "flush": true, 00:19:28.981 "reset": true, 00:19:28.981 "compare": false, 00:19:28.981 "compare_and_write": false, 00:19:28.981 "abort": true, 00:19:28.981 "nvme_admin": false, 00:19:28.981 "nvme_io": false 00:19:28.981 }, 00:19:28.981 "memory_domains": [ 00:19:28.981 { 00:19:28.981 "dma_device_id": "system", 00:19:28.981 "dma_device_type": 1 00:19:28.981 }, 00:19:28.981 { 00:19:28.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.981 "dma_device_type": 2 00:19:28.981 } 00:19:28.981 ], 00:19:28.981 "driver_specific": { 00:19:28.981 "passthru": { 00:19:28.981 "name": "pt3", 00:19:28.981 "base_bdev_name": "malloc3" 00:19:28.981 } 00:19:28.981 } 
00:19:28.982 }' 00:19:28.982 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:28.982 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:28.982 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:28.982 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:29.239 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:29.239 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:29.239 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:29.239 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:29.239 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:29.239 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:29.239 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:29.239 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:29.239 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:29.239 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:19:29.239 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:29.497 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:29.497 "name": "pt4", 00:19:29.497 "aliases": [ 00:19:29.497 "a96989d1-baf5-5e69-a52b-1706364d8618" 00:19:29.497 ], 00:19:29.497 "product_name": "passthru", 00:19:29.497 "block_size": 512, 00:19:29.497 "num_blocks": 65536, 00:19:29.497 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:29.497 "assigned_rate_limits": { 00:19:29.497 "rw_ios_per_sec": 0, 00:19:29.497 "rw_mbytes_per_sec": 0, 00:19:29.497 "r_mbytes_per_sec": 0, 00:19:29.497 "w_mbytes_per_sec": 0 00:19:29.497 }, 00:19:29.497 "claimed": true, 00:19:29.497 "claim_type": "exclusive_write", 00:19:29.497 "zoned": false, 00:19:29.497 "supported_io_types": { 00:19:29.497 "read": true, 00:19:29.497 "write": true, 00:19:29.497 "unmap": true, 00:19:29.497 "write_zeroes": true, 00:19:29.497 "flush": true, 00:19:29.497 "reset": true, 00:19:29.497 "compare": false, 00:19:29.497 "compare_and_write": false, 00:19:29.497 "abort": true, 00:19:29.497 "nvme_admin": false, 00:19:29.497 "nvme_io": false 00:19:29.497 }, 00:19:29.497 "memory_domains": [ 00:19:29.497 { 00:19:29.497 "dma_device_id": "system", 00:19:29.498 "dma_device_type": 1 00:19:29.498 }, 00:19:29.498 { 00:19:29.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.498 "dma_device_type": 2 00:19:29.498 } 00:19:29.498 ], 00:19:29.498 "driver_specific": { 00:19:29.498 "passthru": { 00:19:29.498 "name": "pt4", 00:19:29.498 "base_bdev_name": "malloc4" 00:19:29.498 } 00:19:29.498 } 00:19:29.498 }' 00:19:29.498 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:29.755 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:29.755 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:29.755 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:29.755 03:14:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:29.755 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:29.755 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:29.755 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:29.755 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:29.755 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:30.013 03:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:30.013 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:30.013 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:30.013 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:30.272 [2024-05-15 03:14:01.230440] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.272 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1b54b816-c98a-443e-addb-bcf9aafde847 '!=' 1b54b816-c98a-443e-addb-bcf9aafde847 ']' 00:19:30.272 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:30.272 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:19:30.272 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:19:30.272 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:30.531 [2024-05-15 03:14:01.486838] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:30.531 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:30.531 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:30.531 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:30.531 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:30.531 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:30.531 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:30.531 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:30.531 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:30.531 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:30.531 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:30.531 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.531 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.790 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:30.790 "name": "raid_bdev1", 00:19:30.790 "uuid": 
"1b54b816-c98a-443e-addb-bcf9aafde847", 00:19:30.790 "strip_size_kb": 0, 00:19:30.790 "state": "online", 00:19:30.790 "raid_level": "raid1", 00:19:30.790 "superblock": true, 00:19:30.790 "num_base_bdevs": 4, 00:19:30.790 "num_base_bdevs_discovered": 3, 00:19:30.790 "num_base_bdevs_operational": 3, 00:19:30.790 "base_bdevs_list": [ 00:19:30.790 { 00:19:30.790 "name": null, 00:19:30.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.790 "is_configured": false, 00:19:30.790 "data_offset": 2048, 00:19:30.790 "data_size": 63488 00:19:30.790 }, 00:19:30.790 { 00:19:30.790 "name": "pt2", 00:19:30.790 "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783", 00:19:30.790 "is_configured": true, 00:19:30.790 "data_offset": 2048, 00:19:30.790 "data_size": 63488 00:19:30.790 }, 00:19:30.790 { 00:19:30.790 "name": "pt3", 00:19:30.790 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:30.790 "is_configured": true, 00:19:30.790 "data_offset": 2048, 00:19:30.790 "data_size": 63488 00:19:30.790 }, 00:19:30.790 { 00:19:30.790 "name": "pt4", 00:19:30.790 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:30.790 "is_configured": true, 00:19:30.790 "data_offset": 2048, 00:19:30.790 "data_size": 63488 00:19:30.790 } 00:19:30.790 ] 00:19:30.790 }' 00:19:30.790 03:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:30.790 03:14:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.402 03:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:31.659 [2024-05-15 03:14:02.581740] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:31.659 [2024-05-15 03:14:02.581766] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:31.659 [2024-05-15 03:14:02.581818] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.659 [2024-05-15 03:14:02.581903] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.659 [2024-05-15 03:14:02.581914] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2486340 name raid_bdev1, state offline 00:19:31.660 03:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.660 03:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:31.916 03:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:31.916 03:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:31.916 03:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:31.916 03:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:31.916 03:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:32.174 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:32.174 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:32.174 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:32.432 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:32.432 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:32.432 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:32.690 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:32.690 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:32.690 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:32.690 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:32.690 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:32.948 [2024-05-15 03:14:03.857084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:32.948 [2024-05-15 03:14:03.857129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.948 [2024-05-15 03:14:03.857144] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2485ba0 00:19:32.948 [2024-05-15 03:14:03.857153] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.948 [2024-05-15 03:14:03.858822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.948 [2024-05-15 03:14:03.858847] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:32.948 [2024-05-15 03:14:03.858914] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:32.948 [2024-05-15 03:14:03.858938] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:32.948 pt2 00:19:32.948 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:32.948 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:32.948 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:32.948 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:32.948 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:32.948 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:32.948 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:32.948 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:32.948 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:32.948 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:32.948 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.948 03:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.207 03:14:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:33.207 "name": "raid_bdev1", 00:19:33.207 "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847", 00:19:33.207 "strip_size_kb": 0, 00:19:33.207 "state": "configuring", 00:19:33.207 "raid_level": "raid1", 00:19:33.207 "superblock": true, 00:19:33.207 "num_base_bdevs": 4, 00:19:33.207 "num_base_bdevs_discovered": 1, 00:19:33.207 "num_base_bdevs_operational": 3, 00:19:33.207 "base_bdevs_list": [ 00:19:33.207 { 00:19:33.207 "name": null, 00:19:33.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.207 "is_configured": false, 00:19:33.207 "data_offset": 2048, 00:19:33.207 "data_size": 63488 00:19:33.207 }, 00:19:33.207 { 00:19:33.207 "name": "pt2", 00:19:33.207 "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783", 00:19:33.207 "is_configured": true, 00:19:33.207 "data_offset": 2048, 00:19:33.207 "data_size": 63488 00:19:33.207 }, 00:19:33.207 { 00:19:33.207 "name": null, 00:19:33.207 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:33.207 "is_configured": false, 00:19:33.207 "data_offset": 2048, 00:19:33.207 "data_size": 63488 00:19:33.207 }, 00:19:33.207 { 00:19:33.207 "name": null, 00:19:33.207 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:33.207 "is_configured": false, 00:19:33.207 "data_offset": 2048, 00:19:33.207 "data_size": 63488 00:19:33.207 } 00:19:33.207 ] 00:19:33.207 }' 00:19:33.207 03:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:33.207 03:14:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.773 03:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:33.773 03:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:33.773 03:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:34.031 [2024-05-15 03:14:05.004184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:34.031 [2024-05-15 03:14:05.004226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.031 [2024-05-15 03:14:05.004242] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2630320 00:19:34.031 [2024-05-15 03:14:05.004252] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.031 [2024-05-15 03:14:05.004575] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.031 [2024-05-15 03:14:05.004590] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:34.031 [2024-05-15 03:14:05.004644] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:34.031 [2024-05-15 03:14:05.004662] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:34.031 pt3 00:19:34.031 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:34.031 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:34.031 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:34.031 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:34.031 03:14:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:34.031 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:34.031 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:34.031 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:34.031 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:34.031 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:34.031 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.031 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.290 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:34.290 "name": "raid_bdev1", 00:19:34.290 "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847", 00:19:34.290 "strip_size_kb": 0, 00:19:34.290 "state": "configuring", 00:19:34.290 "raid_level": "raid1", 00:19:34.290 "superblock": true, 00:19:34.290 "num_base_bdevs": 4, 00:19:34.290 "num_base_bdevs_discovered": 2, 00:19:34.290 "num_base_bdevs_operational": 3, 00:19:34.290 "base_bdevs_list": [ 00:19:34.290 { 00:19:34.290 "name": null, 00:19:34.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.290 "is_configured": false, 00:19:34.290 "data_offset": 2048, 00:19:34.290 "data_size": 63488 00:19:34.290 }, 00:19:34.290 { 00:19:34.290 "name": "pt2", 00:19:34.290 "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783", 00:19:34.290 "is_configured": true, 00:19:34.290 "data_offset": 2048, 00:19:34.290 "data_size": 63488 00:19:34.290 }, 00:19:34.290 { 00:19:34.290 "name": "pt3", 00:19:34.290 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:34.290 "is_configured": true, 00:19:34.290 "data_offset": 2048, 00:19:34.290 "data_size": 63488 00:19:34.290 }, 00:19:34.290 { 00:19:34.290 "name": null, 00:19:34.290 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:34.290 "is_configured": false, 00:19:34.290 "data_offset": 2048, 00:19:34.290 "data_size": 63488 00:19:34.290 } 00:19:34.290 ] 00:19:34.290 }' 00:19:34.290 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:34.290 03:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.858 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:34.858 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:34.858 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:34.858 03:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:35.117 [2024-05-15 03:14:06.135235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:35.117 [2024-05-15 03:14:06.135280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.117 [2024-05-15 03:14:06.135297] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24865e0 00:19:35.117 [2024-05-15 03:14:06.135306] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:19:35.117 [2024-05-15 03:14:06.135637] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.117 [2024-05-15 03:14:06.135652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:35.117 [2024-05-15 03:14:06.135709] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:35.117 [2024-05-15 03:14:06.135727] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:35.117 [2024-05-15 03:14:06.135839] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x263df80 00:19:35.118 [2024-05-15 03:14:06.135856] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:35.118 [2024-05-15 03:14:06.136033] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2489090 00:19:35.118 [2024-05-15 03:14:06.136174] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x263df80 00:19:35.118 [2024-05-15 03:14:06.136182] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x263df80 00:19:35.118 [2024-05-15 03:14:06.136280] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.118 pt4 00:19:35.118 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:35.118 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:35.118 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:35.118 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:35.118 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:35.118 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:35.118 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:35.118 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:35.118 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:35.118 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:35.118 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.118 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.376 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:35.376 "name": "raid_bdev1", 00:19:35.376 "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847", 00:19:35.376 "strip_size_kb": 0, 00:19:35.376 "state": "online", 00:19:35.376 "raid_level": "raid1", 00:19:35.376 "superblock": true, 00:19:35.376 "num_base_bdevs": 4, 00:19:35.376 "num_base_bdevs_discovered": 3, 00:19:35.376 "num_base_bdevs_operational": 3, 00:19:35.376 "base_bdevs_list": [ 00:19:35.376 { 00:19:35.376 "name": null, 00:19:35.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.376 "is_configured": false, 00:19:35.376 "data_offset": 2048, 00:19:35.376 "data_size": 63488 00:19:35.376 }, 00:19:35.376 { 00:19:35.376 "name": "pt2", 00:19:35.376 "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783", 00:19:35.376 "is_configured": 
true, 00:19:35.376 "data_offset": 2048, 00:19:35.376 "data_size": 63488 00:19:35.376 }, 00:19:35.376 { 00:19:35.376 "name": "pt3", 00:19:35.376 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:35.376 "is_configured": true, 00:19:35.376 "data_offset": 2048, 00:19:35.376 "data_size": 63488 00:19:35.376 }, 00:19:35.376 { 00:19:35.376 "name": "pt4", 00:19:35.376 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:35.376 "is_configured": true, 00:19:35.376 "data_offset": 2048, 00:19:35.376 "data_size": 63488 00:19:35.376 } 00:19:35.376 ] 00:19:35.376 }' 00:19:35.376 03:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:35.376 03:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.981 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # '[' 4 -gt 2 ']' 00:19:35.981 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:36.266 [2024-05-15 03:14:07.262240] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.266 [2024-05-15 03:14:07.262264] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.266 [2024-05-15 03:14:07.262314] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.266 [2024-05-15 03:14:07.262383] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.267 [2024-05-15 03:14:07.262392] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x263df80 name raid_bdev1, state offline 00:19:36.267 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.267 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # jq -r '.[]' 00:19:36.525 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # raid_bdev= 00:19:36.525 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@529 -- # '[' -n '' ']' 00:19:36.525 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:36.784 [2024-05-15 03:14:07.767556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:36.784 [2024-05-15 03:14:07.767598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.784 [2024-05-15 03:14:07.767617] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24852d0 00:19:36.784 [2024-05-15 03:14:07.767627] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.784 [2024-05-15 03:14:07.769315] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.784 [2024-05-15 03:14:07.769341] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:36.784 [2024-05-15 03:14:07.769405] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:36.784 [2024-05-15 03:14:07.769430] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:36.784 pt1 00:19:36.784 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state 
raid_bdev1 configuring raid1 0 4 00:19:36.784 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:36.784 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:36.784 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:36.784 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:36.784 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:36.784 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.784 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.784 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.784 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.784 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.784 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.043 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:37.043 "name": "raid_bdev1", 00:19:37.043 "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847", 00:19:37.043 "strip_size_kb": 0, 00:19:37.043 "state": "configuring", 00:19:37.043 "raid_level": "raid1", 00:19:37.043 "superblock": true, 00:19:37.043 "num_base_bdevs": 4, 00:19:37.043 "num_base_bdevs_discovered": 1, 00:19:37.043 "num_base_bdevs_operational": 4, 00:19:37.043 "base_bdevs_list": [ 00:19:37.043 { 00:19:37.043 "name": "pt1", 00:19:37.043 "uuid": "2d2af25c-ef24-55e2-a927-b34266727ffc", 00:19:37.043 "is_configured": true, 00:19:37.043 "data_offset": 2048, 00:19:37.043 "data_size": 63488 00:19:37.043 }, 00:19:37.043 { 00:19:37.043 "name": null, 00:19:37.043 "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783", 00:19:37.043 "is_configured": false, 00:19:37.043 "data_offset": 2048, 00:19:37.043 "data_size": 63488 00:19:37.043 }, 00:19:37.043 { 00:19:37.043 "name": null, 00:19:37.043 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:37.043 "is_configured": false, 00:19:37.043 "data_offset": 2048, 00:19:37.043 "data_size": 63488 00:19:37.043 }, 00:19:37.043 { 00:19:37.043 "name": null, 00:19:37.043 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:37.043 "is_configured": false, 00:19:37.043 "data_offset": 2048, 00:19:37.043 "data_size": 63488 00:19:37.043 } 00:19:37.043 ] 00:19:37.043 }' 00:19:37.043 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:37.043 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.610 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i = 1 )) 00:19:37.610 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:19:37.610 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:37.869 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:19:37.869 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:19:37.869 
03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:38.127 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:19:38.127 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:19:38.127 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # i=3 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:38.386 [2024-05-15 03:14:09.512236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:38.386 [2024-05-15 03:14:09.512279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.386 [2024-05-15 03:14:09.512295] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2485ba0 00:19:38.386 [2024-05-15 03:14:09.512305] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.386 [2024-05-15 03:14:09.512645] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.386 [2024-05-15 03:14:09.512660] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:38.386 [2024-05-15 03:14:09.512718] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:38.386 [2024-05-15 03:14:09.512728] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:38.386 [2024-05-15 03:14:09.512734] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.386 [2024-05-15 03:14:09.512752] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26355f0 name raid_bdev1, state configuring 00:19:38.386 [2024-05-15 03:14:09.512780] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:38.386 pt4 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@551 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.386 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.645 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:38.645 "name": "raid_bdev1", 00:19:38.645 "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847", 00:19:38.645 "strip_size_kb": 0, 00:19:38.645 "state": "configuring", 00:19:38.645 "raid_level": "raid1", 00:19:38.645 "superblock": true, 00:19:38.645 "num_base_bdevs": 4, 00:19:38.645 "num_base_bdevs_discovered": 1, 00:19:38.645 "num_base_bdevs_operational": 3, 00:19:38.645 "base_bdevs_list": [ 00:19:38.645 { 00:19:38.645 "name": null, 00:19:38.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.645 "is_configured": false, 00:19:38.645 "data_offset": 2048, 00:19:38.645 "data_size": 63488 00:19:38.645 }, 00:19:38.645 { 00:19:38.645 "name": null, 00:19:38.645 "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783", 00:19:38.645 "is_configured": false, 00:19:38.645 "data_offset": 2048, 00:19:38.645 "data_size": 63488 00:19:38.645 }, 00:19:38.645 { 00:19:38.645 "name": null, 00:19:38.645 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:38.645 "is_configured": false, 00:19:38.645 "data_offset": 2048, 00:19:38.645 "data_size": 63488 00:19:38.645 }, 00:19:38.645 { 00:19:38.645 "name": "pt4", 00:19:38.645 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:38.645 "is_configured": true, 00:19:38.645 "data_offset": 2048, 00:19:38.645 "data_size": 63488 00:19:38.645 } 00:19:38.645 ] 00:19:38.645 }' 00:19:38.645 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:38.645 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.580 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i = 1 )) 00:19:39.580 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:19:39.580 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:39.580 [2024-05-15 03:14:10.655400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:39.580 [2024-05-15 03:14:10.655451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.580 [2024-05-15 03:14:10.655470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24865e0 00:19:39.580 [2024-05-15 03:14:10.655480] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.581 [2024-05-15 03:14:10.655819] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.581 [2024-05-15 03:14:10.655835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:39.581 [2024-05-15 03:14:10.655905] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:39.581 [2024-05-15 03:14:10.655924] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:39.581 pt2 00:19:39.581 03:14:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@554 -- # (( i++ )) 00:19:39.581 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:19:39.581 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:39.838 [2024-05-15 03:14:10.908208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:39.838 [2024-05-15 03:14:10.908243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.838 [2024-05-15 03:14:10.908258] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2630320 00:19:39.838 [2024-05-15 03:14:10.908267] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.838 [2024-05-15 03:14:10.908559] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.838 [2024-05-15 03:14:10.908573] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:39.838 [2024-05-15 03:14:10.908623] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:39.838 [2024-05-15 03:14:10.908640] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:39.838 [2024-05-15 03:14:10.908751] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x248cc30 00:19:39.838 [2024-05-15 03:14:10.908760] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:39.838 [2024-05-15 03:14:10.908948] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2487ac0 00:19:39.838 [2024-05-15 03:14:10.909087] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x248cc30 00:19:39.838 [2024-05-15 03:14:10.909095] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x248cc30 00:19:39.838 [2024-05-15 03:14:10.909194] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.838 pt3 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i++ )) 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@559 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.838 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.095 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.095 "name": "raid_bdev1", 00:19:40.095 "uuid": "1b54b816-c98a-443e-addb-bcf9aafde847", 00:19:40.095 "strip_size_kb": 0, 00:19:40.095 "state": "online", 00:19:40.095 "raid_level": "raid1", 00:19:40.095 "superblock": true, 00:19:40.095 "num_base_bdevs": 4, 00:19:40.095 "num_base_bdevs_discovered": 3, 00:19:40.095 "num_base_bdevs_operational": 3, 00:19:40.095 "base_bdevs_list": [ 00:19:40.095 { 00:19:40.095 "name": null, 00:19:40.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.095 "is_configured": false, 00:19:40.095 "data_offset": 2048, 00:19:40.095 "data_size": 63488 00:19:40.095 }, 00:19:40.095 { 00:19:40.095 "name": "pt2", 00:19:40.095 "uuid": "e4c6f003-380b-597b-81e9-cdeec199d783", 00:19:40.095 "is_configured": true, 00:19:40.095 "data_offset": 2048, 00:19:40.095 "data_size": 63488 00:19:40.095 }, 00:19:40.095 { 00:19:40.095 "name": "pt3", 00:19:40.095 "uuid": "8965618a-5e71-5b59-a061-f24a40fbc5f8", 00:19:40.095 "is_configured": true, 00:19:40.095 "data_offset": 2048, 00:19:40.096 "data_size": 63488 00:19:40.096 }, 00:19:40.096 { 00:19:40.096 "name": "pt4", 00:19:40.096 "uuid": "a96989d1-baf5-5e69-a52b-1706364d8618", 00:19:40.096 "is_configured": true, 00:19:40.096 "data_offset": 2048, 00:19:40.096 "data_size": 63488 00:19:40.096 } 00:19:40.096 ] 00:19:40.096 }' 00:19:40.096 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.096 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.661 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:19:40.662 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:40.920 [2024-05-15 03:14:11.983340] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.920 03:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # '[' 1b54b816-c98a-443e-addb-bcf9aafde847 '!=' 1b54b816-c98a-443e-addb-bcf9aafde847 ']' 00:19:40.920 03:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 4145803 00:19:40.920 03:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 4145803 ']' 00:19:40.920 03:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 4145803 00:19:40.920 03:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:19:40.920 03:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:40.920 03:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4145803 00:19:40.920 03:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:40.920 03:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:40.920 03:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4145803' 00:19:40.920 killing process with pid 4145803 00:19:40.920 03:14:12 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@965 -- # kill 4145803 00:19:40.920 [2024-05-15 03:14:12.055347] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:40.920 [2024-05-15 03:14:12.055401] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.920 [2024-05-15 03:14:12.055463] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.920 [2024-05-15 03:14:12.055471] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x248cc30 name raid_bdev1, state offline 00:19:40.920 03:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 4145803 00:19:41.179 [2024-05-15 03:14:12.089534] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:41.179 03:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:19:41.179 00:19:41.179 real 0m27.709s 00:19:41.179 user 0m51.819s 00:19:41.179 sys 0m3.780s 00:19:41.179 03:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:41.179 03:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.179 ************************************ 00:19:41.179 END TEST raid_superblock_test 00:19:41.179 ************************************ 00:19:41.438 03:14:12 bdev_raid -- bdev/bdev_raid.sh@821 -- # '[' true = true ']' 00:19:41.438 03:14:12 bdev_raid -- bdev/bdev_raid.sh@822 -- # for n in 2 4 00:19:41.438 03:14:12 bdev_raid -- bdev/bdev_raid.sh@823 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:19:41.438 03:14:12 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:19:41.438 03:14:12 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:41.438 03:14:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:41.438 ************************************ 00:19:41.438 START TEST raid_rebuild_test 00:19:41.438 ************************************ 00:19:41.438 03:14:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 false false true 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=2 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local superblock=false 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local background_io=false 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local verify=true 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i = 1 )) 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # echo BaseBdev1 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # echo BaseBdev2 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:41.439 
03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # local strip_size 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@582 -- # local create_arg 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local data_offset 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # raid_pid=4150779 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # waitforlisten 4150779 /var/tmp/spdk-raid.sock 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 4150779 ']' 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:41.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:41.439 03:14:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.439 [2024-05-15 03:14:12.452599] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:19:41.439 [2024-05-15 03:14:12.452654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4150779 ] 00:19:41.439 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:41.439 Zero copy mechanism will not be used. 
00:19:41.439 [2024-05-15 03:14:12.552471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.697 [2024-05-15 03:14:12.647080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.697 [2024-05-15 03:14:12.707251] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.697 [2024-05-15 03:14:12.707303] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.263 03:14:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:42.263 03:14:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:19:42.263 03:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:19:42.263 03:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:42.519 BaseBdev1_malloc 00:19:42.519 03:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:42.777 [2024-05-15 03:14:13.900708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:42.777 [2024-05-15 03:14:13.900754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.777 [2024-05-15 03:14:13.900775] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2123b00 00:19:42.777 [2024-05-15 03:14:13.900786] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.777 [2024-05-15 03:14:13.902504] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.777 [2024-05-15 03:14:13.902532] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:42.777 BaseBdev1 00:19:42.777 03:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:19:42.777 03:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:43.035 BaseBdev2_malloc 00:19:43.035 03:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:43.292 [2024-05-15 03:14:14.406609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:43.293 [2024-05-15 03:14:14.406650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.293 [2024-05-15 03:14:14.406666] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22c9860 00:19:43.293 [2024-05-15 03:14:14.406676] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.293 [2024-05-15 03:14:14.408227] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.293 [2024-05-15 03:14:14.408254] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:43.293 BaseBdev2 00:19:43.293 03:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:43.550 spare_malloc 
00:19:43.550 03:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:43.806 spare_delay 00:19:43.807 03:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@614 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:44.064 [2024-05-15 03:14:15.173103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:44.065 [2024-05-15 03:14:15.173145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.065 [2024-05-15 03:14:15.173166] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22c9f50 00:19:44.065 [2024-05-15 03:14:15.173176] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.065 [2024-05-15 03:14:15.174778] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.065 [2024-05-15 03:14:15.174805] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:44.065 spare 00:19:44.065 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:44.322 [2024-05-15 03:14:15.425787] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:44.322 [2024-05-15 03:14:15.427282] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:44.322 [2024-05-15 03:14:15.427366] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x211c460 00:19:44.322 [2024-05-15 03:14:15.427376] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:44.323 [2024-05-15 03:14:15.427583] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x211c140 00:19:44.323 [2024-05-15 03:14:15.427733] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x211c460 00:19:44.323 [2024-05-15 03:14:15.427742] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x211c460 00:19:44.323 [2024-05-15 03:14:15.427865] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.323 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:44.323 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:44.323 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:44.323 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:44.323 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:44.323 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:44.323 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:44.323 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:44.323 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:44.323 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@125 -- # local tmp 
00:19:44.323 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.323 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.580 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:44.580 "name": "raid_bdev1", 00:19:44.580 "uuid": "cfe8e6b5-b9d4-4b34-9a64-f44e472baf53", 00:19:44.580 "strip_size_kb": 0, 00:19:44.580 "state": "online", 00:19:44.580 "raid_level": "raid1", 00:19:44.580 "superblock": false, 00:19:44.580 "num_base_bdevs": 2, 00:19:44.580 "num_base_bdevs_discovered": 2, 00:19:44.580 "num_base_bdevs_operational": 2, 00:19:44.580 "base_bdevs_list": [ 00:19:44.580 { 00:19:44.580 "name": "BaseBdev1", 00:19:44.580 "uuid": "229a2374-ec3b-518f-ad72-37ee5770c9c6", 00:19:44.580 "is_configured": true, 00:19:44.580 "data_offset": 0, 00:19:44.580 "data_size": 65536 00:19:44.580 }, 00:19:44.580 { 00:19:44.580 "name": "BaseBdev2", 00:19:44.580 "uuid": "2e92bd01-2611-53c5-b02f-92b129ff1f29", 00:19:44.580 "is_configured": true, 00:19:44.580 "data_offset": 0, 00:19:44.580 "data_size": 65536 00:19:44.580 } 00:19:44.580 ] 00:19:44.580 }' 00:19:44.580 03:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:44.580 03:14:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.512 03:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:45.512 03:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:19:45.512 [2024-05-15 03:14:16.536981] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:45.512 03:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=65536 00:19:45.512 03:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.512 03:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # data_offset=0 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@626 -- # '[' false = true ']' 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' true = true ']' 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@630 -- # local write_unit_size 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:45.770 03:14:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:46.029 [2024-05-15 03:14:17.058170] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x211a4e0 00:19:46.029 /dev/nbd0 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:46.029 1+0 records in 00:19:46.029 1+0 records out 00:19:46.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180759 s, 22.7 MB/s 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # '[' raid1 = raid5f ']' 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@638 -- # write_unit_size=1 00:19:46.029 03:14:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:51.297 65536+0 records in 00:19:51.297 65536+0 records out 00:19:51.297 33554432 bytes (34 MB, 32 MiB) copied, 5.08277 s, 6.6 MB/s 00:19:51.297 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@641 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:51.297 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:51.297 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:51.297 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:51.297 03:14:22 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:51.297 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.297 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:51.555 [2024-05-15 03:14:22.463559] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.555 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:51.555 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:51.555 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:51.555 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:51.555 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:51.555 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:51.555 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:51.555 03:14:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:51.555 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:51.555 [2024-05-15 03:14:22.708254] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:51.813 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:51.813 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:51.813 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:51.813 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:51.813 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:51.813 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:51.813 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:51.813 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:51.813 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:51.813 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:51.813 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.813 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.071 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:52.071 "name": "raid_bdev1", 00:19:52.071 "uuid": "cfe8e6b5-b9d4-4b34-9a64-f44e472baf53", 00:19:52.071 "strip_size_kb": 0, 00:19:52.071 "state": "online", 00:19:52.071 "raid_level": "raid1", 00:19:52.071 "superblock": false, 00:19:52.071 "num_base_bdevs": 2, 00:19:52.071 "num_base_bdevs_discovered": 1, 00:19:52.071 "num_base_bdevs_operational": 1, 00:19:52.071 "base_bdevs_list": [ 00:19:52.071 { 00:19:52.071 "name": null, 00:19:52.071 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:52.071 "is_configured": false, 00:19:52.071 "data_offset": 0, 00:19:52.071 "data_size": 65536 00:19:52.071 }, 00:19:52.071 { 00:19:52.071 "name": "BaseBdev2", 00:19:52.071 "uuid": "2e92bd01-2611-53c5-b02f-92b129ff1f29", 00:19:52.071 "is_configured": true, 00:19:52.071 "data_offset": 0, 00:19:52.071 "data_size": 65536 00:19:52.071 } 00:19:52.071 ] 00:19:52.071 }' 00:19:52.071 03:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:52.071 03:14:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.635 03:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:52.892 [2024-05-15 03:14:23.851337] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:52.892 [2024-05-15 03:14:23.856104] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e2c1e0 00:19:52.892 [2024-05-15 03:14:23.858413] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:52.892 03:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # sleep 1 00:19:53.873 03:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.873 03:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:53.873 03:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:53.873 03:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:53.873 03:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:53.873 03:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.873 03:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.131 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:54.131 "name": "raid_bdev1", 00:19:54.131 "uuid": "cfe8e6b5-b9d4-4b34-9a64-f44e472baf53", 00:19:54.131 "strip_size_kb": 0, 00:19:54.131 "state": "online", 00:19:54.131 "raid_level": "raid1", 00:19:54.131 "superblock": false, 00:19:54.131 "num_base_bdevs": 2, 00:19:54.131 "num_base_bdevs_discovered": 2, 00:19:54.131 "num_base_bdevs_operational": 2, 00:19:54.131 "process": { 00:19:54.131 "type": "rebuild", 00:19:54.131 "target": "spare", 00:19:54.131 "progress": { 00:19:54.131 "blocks": 22528, 00:19:54.131 "percent": 34 00:19:54.131 } 00:19:54.131 }, 00:19:54.131 "base_bdevs_list": [ 00:19:54.131 { 00:19:54.131 "name": "spare", 00:19:54.131 "uuid": "47a0bd13-9f69-555a-9c9d-26f0bee62680", 00:19:54.131 "is_configured": true, 00:19:54.131 "data_offset": 0, 00:19:54.131 "data_size": 65536 00:19:54.131 }, 00:19:54.131 { 00:19:54.131 "name": "BaseBdev2", 00:19:54.131 "uuid": "2e92bd01-2611-53c5-b02f-92b129ff1f29", 00:19:54.131 "is_configured": true, 00:19:54.131 "data_offset": 0, 00:19:54.131 "data_size": 65536 00:19:54.131 } 00:19:54.131 ] 00:19:54.131 }' 00:19:54.131 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:54.131 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.131 03:14:25 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:54.131 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.131 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:54.389 [2024-05-15 03:14:25.384242] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.389 [2024-05-15 03:14:25.470729] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:54.389 [2024-05-15 03:14:25.470775] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.389 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:54.389 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:54.389 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:54.389 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:54.389 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:54.389 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:54.389 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:54.389 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:54.389 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:54.389 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:54.389 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.389 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.647 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:54.647 "name": "raid_bdev1", 00:19:54.647 "uuid": "cfe8e6b5-b9d4-4b34-9a64-f44e472baf53", 00:19:54.647 "strip_size_kb": 0, 00:19:54.647 "state": "online", 00:19:54.647 "raid_level": "raid1", 00:19:54.647 "superblock": false, 00:19:54.647 "num_base_bdevs": 2, 00:19:54.647 "num_base_bdevs_discovered": 1, 00:19:54.647 "num_base_bdevs_operational": 1, 00:19:54.647 "base_bdevs_list": [ 00:19:54.647 { 00:19:54.647 "name": null, 00:19:54.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.647 "is_configured": false, 00:19:54.647 "data_offset": 0, 00:19:54.647 "data_size": 65536 00:19:54.647 }, 00:19:54.647 { 00:19:54.647 "name": "BaseBdev2", 00:19:54.647 "uuid": "2e92bd01-2611-53c5-b02f-92b129ff1f29", 00:19:54.647 "is_configured": true, 00:19:54.647 "data_offset": 0, 00:19:54.647 "data_size": 65536 00:19:54.647 } 00:19:54.647 ] 00:19:54.647 }' 00:19:54.647 03:14:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:54.647 03:14:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.212 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.212 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:55.212 03:14:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:55.212 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:55.212 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:55.212 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.212 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.471 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:55.471 "name": "raid_bdev1", 00:19:55.471 "uuid": "cfe8e6b5-b9d4-4b34-9a64-f44e472baf53", 00:19:55.471 "strip_size_kb": 0, 00:19:55.471 "state": "online", 00:19:55.471 "raid_level": "raid1", 00:19:55.471 "superblock": false, 00:19:55.471 "num_base_bdevs": 2, 00:19:55.471 "num_base_bdevs_discovered": 1, 00:19:55.471 "num_base_bdevs_operational": 1, 00:19:55.471 "base_bdevs_list": [ 00:19:55.471 { 00:19:55.471 "name": null, 00:19:55.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.471 "is_configured": false, 00:19:55.471 "data_offset": 0, 00:19:55.471 "data_size": 65536 00:19:55.471 }, 00:19:55.471 { 00:19:55.471 "name": "BaseBdev2", 00:19:55.471 "uuid": "2e92bd01-2611-53c5-b02f-92b129ff1f29", 00:19:55.471 "is_configured": true, 00:19:55.471 "data_offset": 0, 00:19:55.471 "data_size": 65536 00:19:55.471 } 00:19:55.471 ] 00:19:55.471 }' 00:19:55.471 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:55.729 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:55.729 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:55.729 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:55.729 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@667 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:55.987 [2024-05-15 03:14:26.955097] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.987 [2024-05-15 03:14:26.959830] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2120860 00:19:55.987 [2024-05-15 03:14:26.961342] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:55.987 03:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # sleep 1 00:19:56.922 03:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.922 03:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:56.922 03:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:56.922 03:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:56.922 03:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:56.922 03:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.922 03:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.180 
03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:57.180 "name": "raid_bdev1", 00:19:57.180 "uuid": "cfe8e6b5-b9d4-4b34-9a64-f44e472baf53", 00:19:57.180 "strip_size_kb": 0, 00:19:57.180 "state": "online", 00:19:57.181 "raid_level": "raid1", 00:19:57.181 "superblock": false, 00:19:57.181 "num_base_bdevs": 2, 00:19:57.181 "num_base_bdevs_discovered": 2, 00:19:57.181 "num_base_bdevs_operational": 2, 00:19:57.181 "process": { 00:19:57.181 "type": "rebuild", 00:19:57.181 "target": "spare", 00:19:57.181 "progress": { 00:19:57.181 "blocks": 24576, 00:19:57.181 "percent": 37 00:19:57.181 } 00:19:57.181 }, 00:19:57.181 "base_bdevs_list": [ 00:19:57.181 { 00:19:57.181 "name": "spare", 00:19:57.181 "uuid": "47a0bd13-9f69-555a-9c9d-26f0bee62680", 00:19:57.181 "is_configured": true, 00:19:57.181 "data_offset": 0, 00:19:57.181 "data_size": 65536 00:19:57.181 }, 00:19:57.181 { 00:19:57.181 "name": "BaseBdev2", 00:19:57.181 "uuid": "2e92bd01-2611-53c5-b02f-92b129ff1f29", 00:19:57.181 "is_configured": true, 00:19:57.181 "data_offset": 0, 00:19:57.181 "data_size": 65536 00:19:57.181 } 00:19:57.181 ] 00:19:57.181 }' 00:19:57.181 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:57.181 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:57.181 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # '[' false = true ']' 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=2 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # '[' 2 -gt 2 ']' 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # local timeout=645 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.439 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.698 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:57.698 "name": "raid_bdev1", 00:19:57.698 "uuid": "cfe8e6b5-b9d4-4b34-9a64-f44e472baf53", 00:19:57.698 "strip_size_kb": 0, 00:19:57.698 "state": "online", 00:19:57.698 "raid_level": "raid1", 00:19:57.698 "superblock": false, 00:19:57.698 "num_base_bdevs": 2, 00:19:57.698 "num_base_bdevs_discovered": 2, 00:19:57.698 "num_base_bdevs_operational": 2, 00:19:57.698 
"process": { 00:19:57.698 "type": "rebuild", 00:19:57.698 "target": "spare", 00:19:57.698 "progress": { 00:19:57.698 "blocks": 32768, 00:19:57.698 "percent": 50 00:19:57.698 } 00:19:57.698 }, 00:19:57.698 "base_bdevs_list": [ 00:19:57.698 { 00:19:57.698 "name": "spare", 00:19:57.698 "uuid": "47a0bd13-9f69-555a-9c9d-26f0bee62680", 00:19:57.698 "is_configured": true, 00:19:57.698 "data_offset": 0, 00:19:57.698 "data_size": 65536 00:19:57.698 }, 00:19:57.698 { 00:19:57.698 "name": "BaseBdev2", 00:19:57.698 "uuid": "2e92bd01-2611-53c5-b02f-92b129ff1f29", 00:19:57.698 "is_configured": true, 00:19:57.698 "data_offset": 0, 00:19:57.698 "data_size": 65536 00:19:57.698 } 00:19:57.698 ] 00:19:57.698 }' 00:19:57.698 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:57.698 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:57.698 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:57.698 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:57.698 03:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # sleep 1 00:19:58.633 03:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:19:58.633 03:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.633 03:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:58.633 03:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:58.633 03:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:58.633 03:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:58.633 03:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.633 03:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.892 03:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:58.892 "name": "raid_bdev1", 00:19:58.892 "uuid": "cfe8e6b5-b9d4-4b34-9a64-f44e472baf53", 00:19:58.892 "strip_size_kb": 0, 00:19:58.892 "state": "online", 00:19:58.892 "raid_level": "raid1", 00:19:58.892 "superblock": false, 00:19:58.892 "num_base_bdevs": 2, 00:19:58.892 "num_base_bdevs_discovered": 2, 00:19:58.892 "num_base_bdevs_operational": 2, 00:19:58.892 "process": { 00:19:58.892 "type": "rebuild", 00:19:58.892 "target": "spare", 00:19:58.892 "progress": { 00:19:58.892 "blocks": 59392, 00:19:58.892 "percent": 90 00:19:58.892 } 00:19:58.892 }, 00:19:58.892 "base_bdevs_list": [ 00:19:58.892 { 00:19:58.892 "name": "spare", 00:19:58.892 "uuid": "47a0bd13-9f69-555a-9c9d-26f0bee62680", 00:19:58.892 "is_configured": true, 00:19:58.892 "data_offset": 0, 00:19:58.892 "data_size": 65536 00:19:58.892 }, 00:19:58.892 { 00:19:58.892 "name": "BaseBdev2", 00:19:58.892 "uuid": "2e92bd01-2611-53c5-b02f-92b129ff1f29", 00:19:58.892 "is_configured": true, 00:19:58.892 "data_offset": 0, 00:19:58.892 "data_size": 65536 00:19:58.892 } 00:19:58.892 ] 00:19:58.892 }' 00:19:58.892 03:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:58.892 03:14:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.892 03:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:58.892 03:14:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.892 03:14:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # sleep 1 00:19:59.150 [2024-05-15 03:14:30.185475] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:59.150 [2024-05-15 03:14:30.185532] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:59.150 [2024-05-15 03:14:30.185571] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.086 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:20:00.086 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.086 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:00.086 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:00.086 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:00.086 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:00.086 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.086 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:00.343 "name": "raid_bdev1", 00:20:00.343 "uuid": "cfe8e6b5-b9d4-4b34-9a64-f44e472baf53", 00:20:00.343 "strip_size_kb": 0, 00:20:00.343 "state": "online", 00:20:00.343 "raid_level": "raid1", 00:20:00.343 "superblock": false, 00:20:00.343 "num_base_bdevs": 2, 00:20:00.343 "num_base_bdevs_discovered": 2, 00:20:00.343 "num_base_bdevs_operational": 2, 00:20:00.343 "base_bdevs_list": [ 00:20:00.343 { 00:20:00.343 "name": "spare", 00:20:00.343 "uuid": "47a0bd13-9f69-555a-9c9d-26f0bee62680", 00:20:00.343 "is_configured": true, 00:20:00.343 "data_offset": 0, 00:20:00.343 "data_size": 65536 00:20:00.343 }, 00:20:00.343 { 00:20:00.343 "name": "BaseBdev2", 00:20:00.343 "uuid": "2e92bd01-2611-53c5-b02f-92b129ff1f29", 00:20:00.343 "is_configured": true, 00:20:00.343 "data_offset": 0, 00:20:00.343 "data_size": 65536 00:20:00.343 } 00:20:00.343 ] 00:20:00.343 }' 00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # break 00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=none 
00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.343 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:00.601 "name": "raid_bdev1", 00:20:00.601 "uuid": "cfe8e6b5-b9d4-4b34-9a64-f44e472baf53", 00:20:00.601 "strip_size_kb": 0, 00:20:00.601 "state": "online", 00:20:00.601 "raid_level": "raid1", 00:20:00.601 "superblock": false, 00:20:00.601 "num_base_bdevs": 2, 00:20:00.601 "num_base_bdevs_discovered": 2, 00:20:00.601 "num_base_bdevs_operational": 2, 00:20:00.601 "base_bdevs_list": [ 00:20:00.601 { 00:20:00.601 "name": "spare", 00:20:00.601 "uuid": "47a0bd13-9f69-555a-9c9d-26f0bee62680", 00:20:00.601 "is_configured": true, 00:20:00.601 "data_offset": 0, 00:20:00.601 "data_size": 65536 00:20:00.601 }, 00:20:00.601 { 00:20:00.601 "name": "BaseBdev2", 00:20:00.601 "uuid": "2e92bd01-2611-53c5-b02f-92b129ff1f29", 00:20:00.601 "is_configured": true, 00:20:00.601 "data_offset": 0, 00:20:00.601 "data_size": 65536 00:20:00.601 } 00:20:00.601 ] 00:20:00.601 }' 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.601 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.859 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:00.859 "name": "raid_bdev1", 00:20:00.859 "uuid": "cfe8e6b5-b9d4-4b34-9a64-f44e472baf53", 00:20:00.859 "strip_size_kb": 0, 00:20:00.859 "state": 
"online", 00:20:00.859 "raid_level": "raid1", 00:20:00.859 "superblock": false, 00:20:00.859 "num_base_bdevs": 2, 00:20:00.859 "num_base_bdevs_discovered": 2, 00:20:00.859 "num_base_bdevs_operational": 2, 00:20:00.859 "base_bdevs_list": [ 00:20:00.859 { 00:20:00.859 "name": "spare", 00:20:00.859 "uuid": "47a0bd13-9f69-555a-9c9d-26f0bee62680", 00:20:00.859 "is_configured": true, 00:20:00.859 "data_offset": 0, 00:20:00.859 "data_size": 65536 00:20:00.859 }, 00:20:00.859 { 00:20:00.859 "name": "BaseBdev2", 00:20:00.859 "uuid": "2e92bd01-2611-53c5-b02f-92b129ff1f29", 00:20:00.859 "is_configured": true, 00:20:00.859 "data_offset": 0, 00:20:00.859 "data_size": 65536 00:20:00.859 } 00:20:00.859 ] 00:20:00.859 }' 00:20:00.859 03:14:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:00.859 03:14:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.793 03:14:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:01.793 [2024-05-15 03:14:32.845227] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:01.793 [2024-05-15 03:14:32.845252] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:01.793 [2024-05-15 03:14:32.845311] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.793 [2024-05-15 03:14:32.845368] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.793 [2024-05-15 03:14:32.845377] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x211c460 name raid_bdev1, state offline 00:20:01.793 03:14:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@725 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.793 03:14:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@725 -- # jq length 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]] 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@727 -- # '[' true = true ']' 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@728 -- # '[' false = true ']' 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:02.051 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:02.310 /dev/nbd0 
00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:02.310 1+0 records in 00:20:02.310 1+0 records out 00:20:02.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199176 s, 20.6 MB/s 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:02.310 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:02.568 /dev/nbd1 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:02.568 1+0 records in 00:20:02.568 1+0 records out 00:20:02.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245891 s, 16.7 MB/s 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:02.568 03:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:02.825 03:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@744 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:02.825 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:02.825 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:02.825 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:02.825 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:02.825 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:02.825 03:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:03.083 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:03.083 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:03.083 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:03.083 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:03.083 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:03.083 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:03.083 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:03.083 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:03.083 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:03.083 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # 
(( i <= 20 )) 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@748 -- # '[' false = true ']' 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@795 -- # killprocess 4150779 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 4150779 ']' 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 4150779 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4150779 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4150779' 00:20:03.342 killing process with pid 4150779 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@965 -- # kill 4150779 00:20:03.342 Received shutdown signal, test time was about 60.000000 seconds 00:20:03.342 00:20:03.342 Latency(us) 00:20:03.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.342 =================================================================================================================== 00:20:03.342 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.342 [2024-05-15 03:14:34.357258] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:03.342 03:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # wait 4150779 00:20:03.342 [2024-05-15 03:14:34.382160] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:03.601 03:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@797 -- # return 0 00:20:03.601 00:20:03.601 real 0m22.215s 00:20:03.601 user 0m31.204s 00:20:03.601 sys 0m3.849s 00:20:03.601 03:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:03.601 03:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.601 ************************************ 00:20:03.601 END TEST raid_rebuild_test 00:20:03.601 ************************************ 00:20:03.601 03:14:34 bdev_raid -- bdev/bdev_raid.sh@824 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:20:03.601 03:14:34 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:20:03.601 03:14:34 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:03.601 03:14:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:03.601 ************************************ 00:20:03.601 START TEST raid_rebuild_test_sb 00:20:03.601 ************************************ 00:20:03.601 03:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:20:03.601 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1 00:20:03.601 
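The _sb variant starting here is the same raid_rebuild_test function re-run with a superblock; the five positional arguments in the run_test line above map directly onto the locals set in the following trace. Roughly, with parameter names as used in bdev_raid.sh:

    # raid_rebuild_test <raid_level> <num_base_bdevs> <superblock> <background_io> <verify>
    raid_rebuild_test raid1 2 true false true
    # raid1  -> mirrored array
    # 2      -> two base bdevs
    # true   -> superblock=true, adds -s to bdev_raid_create (data_offset becomes 2048)
    # false  -> background_io=false, no I/O load while rebuilding
    # true   -> verify=true, the rebuilt member is byte-compared afterwards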
03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=2 00:20:03.601 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local superblock=true 00:20:03.601 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local background_io=false 00:20:03.601 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local verify=true 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i = 1 )) 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # echo BaseBdev1 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # echo BaseBdev2 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # local strip_size 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # local create_arg 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local data_offset 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # '[' true = true ']' 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # create_arg+=' -s' 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # raid_pid=4154619 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # waitforlisten 4154619 /var/tmp/spdk-raid.sock 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 4154619 ']' 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:03.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
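As in the earlier test, the harness starts bdevperf suspended and drives it over a private RPC socket: -z makes the app wait for RPCs instead of running a workload immediately, -r selects the socket, -T names the target raid bdev, and -L bdev_raid enables the *DEBUG* records that make up most of the trace below. waitforlisten then polls until the socket answers. A rough equivalent of that idiom (the rpc_get_methods probe stands in for the helper's readiness check and is an approximation, not a copy of it):

    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # keep polling until the RPC socket accepts connections
    done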
00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:03.602 03:14:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.602 [2024-05-15 03:14:34.734138] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:20:03.602 [2024-05-15 03:14:34.734172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154619 ] 00:20:03.602 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:03.602 Zero copy mechanism will not be used. 00:20:03.862 [2024-05-15 03:14:34.819302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.862 [2024-05-15 03:14:34.910168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.862 [2024-05-15 03:14:34.977192] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.862 [2024-05-15 03:14:34.977229] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.121 03:14:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:04.121 03:14:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:20:04.121 03:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:20:04.121 03:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:04.121 BaseBdev1_malloc 00:20:04.379 03:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:04.379 [2024-05-15 03:14:35.521521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:04.379 [2024-05-15 03:14:35.521564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.379 [2024-05-15 03:14:35.521581] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1107b00 00:20:04.379 [2024-05-15 03:14:35.521590] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.379 [2024-05-15 03:14:35.523214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.379 [2024-05-15 03:14:35.523240] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:04.379 BaseBdev1 00:20:04.637 03:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:20:04.638 03:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:04.638 BaseBdev2_malloc 00:20:04.896 03:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:04.896 [2024-05-15 03:14:36.035303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:04.896 [2024-05-15 03:14:36.035344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
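Each base device follows the same construction: a 32 MiB, 512-byte-block malloc (RAM-backed) bdev, wrapped in a passthru vbdev so the raid code claims the passthru rather than the malloc bdev itself. The equivalent standalone RPCs:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc          # 32 MiB, 512 B blocks
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
    $rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2

The spare gets one extra layer a few lines further on, bdev_delay_create with 100000 us write latency, apparently so the rebuild stays slow enough for the progress checks later in the trace to sample it mid-flight.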
00:20:04.896 [2024-05-15 03:14:36.035359] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12ad860 00:20:04.896 [2024-05-15 03:14:36.035368] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.896 [2024-05-15 03:14:36.036818] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.896 [2024-05-15 03:14:36.036844] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:04.896 BaseBdev2 00:20:05.155 03:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:05.156 spare_malloc 00:20:05.414 03:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:05.414 spare_delay 00:20:05.673 03:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@614 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:05.673 [2024-05-15 03:14:36.809718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:05.673 [2024-05-15 03:14:36.809756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.673 [2024-05-15 03:14:36.809772] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12adf50 00:20:05.673 [2024-05-15 03:14:36.809781] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.673 [2024-05-15 03:14:36.811265] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.673 [2024-05-15 03:14:36.811290] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:05.673 spare 00:20:05.931 03:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:05.931 [2024-05-15 03:14:37.062414] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:05.931 [2024-05-15 03:14:37.063639] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:05.931 [2024-05-15 03:14:37.063791] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1100460 00:20:05.931 [2024-05-15 03:14:37.063803] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:05.931 [2024-05-15 03:14:37.063992] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12ac8e0 00:20:05.931 [2024-05-15 03:14:37.064136] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1100460 00:20:05.931 [2024-05-15 03:14:37.064148] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1100460 00:20:05.931 [2024-05-15 03:14:37.064242] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.931 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:05.931 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:05.931 03:14:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:05.931 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:05.931 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:05.931 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:05.931 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:05.931 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:05.931 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:05.931 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:05.931 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.931 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.189 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:06.189 "name": "raid_bdev1", 00:20:06.189 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:06.189 "strip_size_kb": 0, 00:20:06.189 "state": "online", 00:20:06.189 "raid_level": "raid1", 00:20:06.189 "superblock": true, 00:20:06.189 "num_base_bdevs": 2, 00:20:06.189 "num_base_bdevs_discovered": 2, 00:20:06.189 "num_base_bdevs_operational": 2, 00:20:06.189 "base_bdevs_list": [ 00:20:06.189 { 00:20:06.189 "name": "BaseBdev1", 00:20:06.189 "uuid": "773f4273-6691-5915-989a-be1201c40577", 00:20:06.189 "is_configured": true, 00:20:06.189 "data_offset": 2048, 00:20:06.189 "data_size": 63488 00:20:06.189 }, 00:20:06.189 { 00:20:06.189 "name": "BaseBdev2", 00:20:06.189 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:06.189 "is_configured": true, 00:20:06.189 "data_offset": 2048, 00:20:06.189 "data_size": 63488 00:20:06.189 } 00:20:06.189 ] 00:20:06.189 }' 00:20:06.189 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:06.189 03:14:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.124 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:07.124 03:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:20:07.124 [2024-05-15 03:14:38.197674] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.124 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=63488 00:20:07.124 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.124 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # data_offset=2048 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@626 -- # '[' false = true ']' 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' true = true ']' 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # local 
write_unit_size 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:07.383 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:07.641 [2024-05-15 03:14:38.714885] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10fe920 00:20:07.641 /dev/nbd0 00:20:07.641 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:07.641 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:07.641 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:20:07.641 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:20:07.641 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:07.641 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:07.641 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:20:07.641 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:20:07.641 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:07.641 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:07.641 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:07.641 1+0 records in 00:20:07.641 1+0 records out 00:20:07.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001997 s, 20.5 MB/s 00:20:07.642 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:07.642 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:20:07.642 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:07.642 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:07.642 03:14:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:20:07.642 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:07.642 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:07.642 
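With the array exported at /dev/nbd0, the test next seeds it with random data: 63488 blocks of 512 B, which is exactly the 65536-block malloc size minus the 2048-block superblock region reported as data_offset above. The later cmp -i 1048576 between member devices skips that same region (2048 x 512 = 1048576 bytes), so only rebuilt data is compared. The seeding idiom on its own:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct   # fill the whole data area
    $rpc nbd_stop_disk /dev/nbd0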
03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # '[' raid1 = raid5f ']' 00:20:07.642 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@638 -- # write_unit_size=1 00:20:07.642 03:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:12.952 63488+0 records in 00:20:12.952 63488+0 records out 00:20:12.952 32505856 bytes (33 MB, 31 MiB) copied, 4.91339 s, 6.6 MB/s 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@641 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:12.952 [2024-05-15 03:14:43.957003] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:12.952 03:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:13.210 [2024-05-15 03:14:44.201258] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:13.210 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.210 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:13.210 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:13.210 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:13.210 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:13.210 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:13.210 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:13.210 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:13.210 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:20:13.210 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:13.210 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.210 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.467 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:13.467 "name": "raid_bdev1", 00:20:13.468 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:13.468 "strip_size_kb": 0, 00:20:13.468 "state": "online", 00:20:13.468 "raid_level": "raid1", 00:20:13.468 "superblock": true, 00:20:13.468 "num_base_bdevs": 2, 00:20:13.468 "num_base_bdevs_discovered": 1, 00:20:13.468 "num_base_bdevs_operational": 1, 00:20:13.468 "base_bdevs_list": [ 00:20:13.468 { 00:20:13.468 "name": null, 00:20:13.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.468 "is_configured": false, 00:20:13.468 "data_offset": 2048, 00:20:13.468 "data_size": 63488 00:20:13.468 }, 00:20:13.468 { 00:20:13.468 "name": "BaseBdev2", 00:20:13.468 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:13.468 "is_configured": true, 00:20:13.468 "data_offset": 2048, 00:20:13.468 "data_size": 63488 00:20:13.468 } 00:20:13.468 ] 00:20:13.468 }' 00:20:13.468 03:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:13.468 03:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.033 03:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:14.290 [2024-05-15 03:14:45.340321] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:14.290 [2024-05-15 03:14:45.345204] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10feb50 00:20:14.290 [2024-05-15 03:14:45.347303] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:14.290 03:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # sleep 1 00:20:15.224 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.224 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:15.224 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:15.224 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:15.224 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:15.224 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.224 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.481 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:15.481 "name": "raid_bdev1", 00:20:15.481 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:15.481 "strip_size_kb": 0, 00:20:15.481 "state": "online", 00:20:15.481 "raid_level": "raid1", 00:20:15.481 "superblock": true, 00:20:15.481 "num_base_bdevs": 2, 
00:20:15.481 "num_base_bdevs_discovered": 2, 00:20:15.481 "num_base_bdevs_operational": 2, 00:20:15.481 "process": { 00:20:15.481 "type": "rebuild", 00:20:15.481 "target": "spare", 00:20:15.481 "progress": { 00:20:15.481 "blocks": 24576, 00:20:15.481 "percent": 38 00:20:15.481 } 00:20:15.481 }, 00:20:15.481 "base_bdevs_list": [ 00:20:15.481 { 00:20:15.481 "name": "spare", 00:20:15.481 "uuid": "0a59c506-e38c-589c-93b8-22a60bc0833c", 00:20:15.481 "is_configured": true, 00:20:15.481 "data_offset": 2048, 00:20:15.481 "data_size": 63488 00:20:15.481 }, 00:20:15.481 { 00:20:15.481 "name": "BaseBdev2", 00:20:15.481 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:15.481 "is_configured": true, 00:20:15.481 "data_offset": 2048, 00:20:15.481 "data_size": 63488 00:20:15.481 } 00:20:15.481 ] 00:20:15.481 }' 00:20:15.481 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:15.739 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.739 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:15.739 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.739 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:15.997 [2024-05-15 03:14:46.937766] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.997 [2024-05-15 03:14:46.959606] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:15.997 [2024-05-15 03:14:46.959650] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.997 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:15.997 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:15.997 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:15.997 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:15.997 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:15.997 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:15.997 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:15.997 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:15.997 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:15.997 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:15.997 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.997 03:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.255 03:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:16.255 "name": "raid_bdev1", 00:20:16.255 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:16.255 "strip_size_kb": 0, 00:20:16.255 "state": "online", 00:20:16.255 
"raid_level": "raid1", 00:20:16.255 "superblock": true, 00:20:16.255 "num_base_bdevs": 2, 00:20:16.255 "num_base_bdevs_discovered": 1, 00:20:16.255 "num_base_bdevs_operational": 1, 00:20:16.255 "base_bdevs_list": [ 00:20:16.255 { 00:20:16.255 "name": null, 00:20:16.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.255 "is_configured": false, 00:20:16.255 "data_offset": 2048, 00:20:16.255 "data_size": 63488 00:20:16.255 }, 00:20:16.255 { 00:20:16.255 "name": "BaseBdev2", 00:20:16.255 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:16.255 "is_configured": true, 00:20:16.255 "data_offset": 2048, 00:20:16.255 "data_size": 63488 00:20:16.255 } 00:20:16.255 ] 00:20:16.255 }' 00:20:16.255 03:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:16.255 03:14:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.822 03:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.822 03:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.822 03:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:16.822 03:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:16.822 03:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.822 03:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.822 03:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.079 03:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:17.079 "name": "raid_bdev1", 00:20:17.079 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:17.079 "strip_size_kb": 0, 00:20:17.079 "state": "online", 00:20:17.079 "raid_level": "raid1", 00:20:17.079 "superblock": true, 00:20:17.079 "num_base_bdevs": 2, 00:20:17.079 "num_base_bdevs_discovered": 1, 00:20:17.079 "num_base_bdevs_operational": 1, 00:20:17.079 "base_bdevs_list": [ 00:20:17.079 { 00:20:17.079 "name": null, 00:20:17.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.079 "is_configured": false, 00:20:17.079 "data_offset": 2048, 00:20:17.079 "data_size": 63488 00:20:17.079 }, 00:20:17.079 { 00:20:17.079 "name": "BaseBdev2", 00:20:17.079 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:17.079 "is_configured": true, 00:20:17.079 "data_offset": 2048, 00:20:17.079 "data_size": 63488 00:20:17.079 } 00:20:17.079 ] 00:20:17.079 }' 00:20:17.079 03:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:17.080 03:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:17.080 03:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:17.080 03:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:17.080 03:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@667 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:17.337 [2024-05-15 03:14:48.447992] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:17.337 [2024-05-15 
03:14:48.452817] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xde3860 00:20:17.337 [2024-05-15 03:14:48.454347] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:17.337 03:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # sleep 1 00:20:18.709 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.709 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:18.709 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:18.710 "name": "raid_bdev1", 00:20:18.710 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:18.710 "strip_size_kb": 0, 00:20:18.710 "state": "online", 00:20:18.710 "raid_level": "raid1", 00:20:18.710 "superblock": true, 00:20:18.710 "num_base_bdevs": 2, 00:20:18.710 "num_base_bdevs_discovered": 2, 00:20:18.710 "num_base_bdevs_operational": 2, 00:20:18.710 "process": { 00:20:18.710 "type": "rebuild", 00:20:18.710 "target": "spare", 00:20:18.710 "progress": { 00:20:18.710 "blocks": 24576, 00:20:18.710 "percent": 38 00:20:18.710 } 00:20:18.710 }, 00:20:18.710 "base_bdevs_list": [ 00:20:18.710 { 00:20:18.710 "name": "spare", 00:20:18.710 "uuid": "0a59c506-e38c-589c-93b8-22a60bc0833c", 00:20:18.710 "is_configured": true, 00:20:18.710 "data_offset": 2048, 00:20:18.710 "data_size": 63488 00:20:18.710 }, 00:20:18.710 { 00:20:18.710 "name": "BaseBdev2", 00:20:18.710 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:18.710 "is_configured": true, 00:20:18.710 "data_offset": 2048, 00:20:18.710 "data_size": 63488 00:20:18.710 } 00:20:18.710 ] 00:20:18.710 }' 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # '[' true = true ']' 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # '[' = false ']' 00:20:18.710 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 671: [: =: unary operator expected 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=2 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # '[' 2 -gt 2 ']' 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # local timeout=666 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.710 03:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.967 03:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:18.967 "name": "raid_bdev1", 00:20:18.967 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:18.967 "strip_size_kb": 0, 00:20:18.967 "state": "online", 00:20:18.967 "raid_level": "raid1", 00:20:18.967 "superblock": true, 00:20:18.967 "num_base_bdevs": 2, 00:20:18.967 "num_base_bdevs_discovered": 2, 00:20:18.967 "num_base_bdevs_operational": 2, 00:20:18.967 "process": { 00:20:18.967 "type": "rebuild", 00:20:18.967 "target": "spare", 00:20:18.967 "progress": { 00:20:18.967 "blocks": 30720, 00:20:18.967 "percent": 48 00:20:18.967 } 00:20:18.967 }, 00:20:18.967 "base_bdevs_list": [ 00:20:18.967 { 00:20:18.967 "name": "spare", 00:20:18.967 "uuid": "0a59c506-e38c-589c-93b8-22a60bc0833c", 00:20:18.968 "is_configured": true, 00:20:18.968 "data_offset": 2048, 00:20:18.968 "data_size": 63488 00:20:18.968 }, 00:20:18.968 { 00:20:18.968 "name": "BaseBdev2", 00:20:18.968 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:18.968 "is_configured": true, 00:20:18.968 "data_offset": 2048, 00:20:18.968 "data_size": 63488 00:20:18.968 } 00:20:18.968 ] 00:20:18.968 }' 00:20:18.968 03:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:19.225 03:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.225 03:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:19.225 03:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.225 03:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # sleep 1 00:20:20.157 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:20:20.157 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.157 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:20.157 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:20.157 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:20.157 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:20.157 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.157 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.414 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:20.414 "name": "raid_bdev1", 00:20:20.414 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:20.414 "strip_size_kb": 0, 00:20:20.414 "state": "online", 00:20:20.414 "raid_level": "raid1", 00:20:20.414 "superblock": true, 00:20:20.414 "num_base_bdevs": 2, 00:20:20.414 "num_base_bdevs_discovered": 2, 00:20:20.414 "num_base_bdevs_operational": 2, 00:20:20.414 "process": { 00:20:20.414 "type": "rebuild", 00:20:20.414 "target": "spare", 00:20:20.414 "progress": { 00:20:20.414 "blocks": 59392, 00:20:20.414 "percent": 93 00:20:20.414 } 00:20:20.414 }, 00:20:20.414 "base_bdevs_list": [ 00:20:20.414 { 00:20:20.414 "name": "spare", 00:20:20.414 "uuid": "0a59c506-e38c-589c-93b8-22a60bc0833c", 00:20:20.414 "is_configured": true, 00:20:20.414 "data_offset": 2048, 00:20:20.414 "data_size": 63488 00:20:20.414 }, 00:20:20.414 { 00:20:20.414 "name": "BaseBdev2", 00:20:20.414 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:20.414 "is_configured": true, 00:20:20.414 "data_offset": 2048, 00:20:20.414 "data_size": 63488 00:20:20.414 } 00:20:20.414 ] 00:20:20.414 }' 00:20:20.414 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:20.414 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.414 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:20.414 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.414 03:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # sleep 1 00:20:20.671 [2024-05-15 03:14:51.577909] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:20.672 [2024-05-15 03:14:51.577969] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:20.672 [2024-05-15 03:14:51.578051] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.604 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:20:21.604 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.604 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:21.604 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:21.604 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:21.604 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:21.604 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.604 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.861 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:21.861 "name": "raid_bdev1", 00:20:21.861 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:21.861 
"strip_size_kb": 0, 00:20:21.861 "state": "online", 00:20:21.861 "raid_level": "raid1", 00:20:21.861 "superblock": true, 00:20:21.861 "num_base_bdevs": 2, 00:20:21.861 "num_base_bdevs_discovered": 2, 00:20:21.861 "num_base_bdevs_operational": 2, 00:20:21.861 "base_bdevs_list": [ 00:20:21.861 { 00:20:21.861 "name": "spare", 00:20:21.861 "uuid": "0a59c506-e38c-589c-93b8-22a60bc0833c", 00:20:21.861 "is_configured": true, 00:20:21.861 "data_offset": 2048, 00:20:21.861 "data_size": 63488 00:20:21.861 }, 00:20:21.861 { 00:20:21.861 "name": "BaseBdev2", 00:20:21.861 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:21.861 "is_configured": true, 00:20:21.861 "data_offset": 2048, 00:20:21.861 "data_size": 63488 00:20:21.861 } 00:20:21.861 ] 00:20:21.861 }' 00:20:21.861 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:21.861 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:21.861 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:21.861 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:21.861 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # break 00:20:21.861 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.861 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:21.861 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:21.862 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:21.862 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:21.862 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.862 03:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:22.119 "name": "raid_bdev1", 00:20:22.119 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:22.119 "strip_size_kb": 0, 00:20:22.119 "state": "online", 00:20:22.119 "raid_level": "raid1", 00:20:22.119 "superblock": true, 00:20:22.119 "num_base_bdevs": 2, 00:20:22.119 "num_base_bdevs_discovered": 2, 00:20:22.119 "num_base_bdevs_operational": 2, 00:20:22.119 "base_bdevs_list": [ 00:20:22.119 { 00:20:22.119 "name": "spare", 00:20:22.119 "uuid": "0a59c506-e38c-589c-93b8-22a60bc0833c", 00:20:22.119 "is_configured": true, 00:20:22.119 "data_offset": 2048, 00:20:22.119 "data_size": 63488 00:20:22.119 }, 00:20:22.119 { 00:20:22.119 "name": "BaseBdev2", 00:20:22.119 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:22.119 "is_configured": true, 00:20:22.119 "data_offset": 2048, 00:20:22.119 "data_size": 63488 00:20:22.119 } 00:20:22.119 ] 00:20:22.119 }' 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.119 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.377 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:22.377 "name": "raid_bdev1", 00:20:22.377 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:22.377 "strip_size_kb": 0, 00:20:22.377 "state": "online", 00:20:22.377 "raid_level": "raid1", 00:20:22.377 "superblock": true, 00:20:22.377 "num_base_bdevs": 2, 00:20:22.377 "num_base_bdevs_discovered": 2, 00:20:22.377 "num_base_bdevs_operational": 2, 00:20:22.377 "base_bdevs_list": [ 00:20:22.377 { 00:20:22.377 "name": "spare", 00:20:22.377 "uuid": "0a59c506-e38c-589c-93b8-22a60bc0833c", 00:20:22.377 "is_configured": true, 00:20:22.377 "data_offset": 2048, 00:20:22.377 "data_size": 63488 00:20:22.377 }, 00:20:22.377 { 00:20:22.377 "name": "BaseBdev2", 00:20:22.377 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:22.377 "is_configured": true, 00:20:22.377 "data_offset": 2048, 00:20:22.377 "data_size": 63488 00:20:22.377 } 00:20:22.377 ] 00:20:22.377 }' 00:20:22.377 03:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:22.377 03:14:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.941 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:23.198 [2024-05-15 03:14:54.273672] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:23.198 [2024-05-15 03:14:54.273700] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:23.198 [2024-05-15 03:14:54.273757] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.198 [2024-05-15 03:14:54.273813] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.198 [2024-05-15 03:14:54.273822] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1100460 name raid_bdev1, state offline 00:20:23.198 03:14:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@725 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.198 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@725 -- # jq length 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]] 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@727 -- # '[' true = true ']' 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@728 -- # '[' false = true ']' 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.455 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:23.712 /dev/nbd0 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.712 1+0 records in 00:20:23.712 1+0 records out 00:20:23.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220157 s, 18.6 MB/s 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.712 03:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:23.969 /dev/nbd1 00:20:23.969 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:23.969 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:23.969 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:20:23.969 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:20:23.969 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:23.969 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:23.969 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:20:23.969 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:20:23.969 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:23.969 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:23.969 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:24.226 1+0 records in 00:20:24.226 1+0 records out 00:20:24.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236409 s, 17.3 MB/s 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:24.226 03:14:55 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.226 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:24.485 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:24.485 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:24.485 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:24.485 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.485 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.485 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:24.485 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:24.485 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.485 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.485 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:24.742 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:24.742 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:24.742 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:24.742 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.742 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.742 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:24.742 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:24.742 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.742 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # '[' true = true ']' 00:20:24.742 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:20:24.742 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev1 ']' 00:20:24.742 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:25.000 03:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:25.000 [2024-05-15 03:14:56.138577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:25.000 [2024-05-15 03:14:56.138625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.000 [2024-05-15 03:14:56.138642] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1107110 00:20:25.000 [2024-05-15 03:14:56.138652] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.000 [2024-05-15 
03:14:56.140364] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.000 [2024-05-15 03:14:56.140392] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:25.000 [2024-05-15 03:14:56.140459] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:25.000 [2024-05-15 03:14:56.140484] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.000 BaseBdev1 00:20:25.260 03:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:20:25.260 03:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev2 ']' 00:20:25.260 03:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:25.260 03:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:25.537 [2024-05-15 03:14:56.635901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:25.537 [2024-05-15 03:14:56.635938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.537 [2024-05-15 03:14:56.635954] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12ada90 00:20:25.537 [2024-05-15 03:14:56.635963] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.537 [2024-05-15 03:14:56.636299] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.537 [2024-05-15 03:14:56.636316] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:25.537 [2024-05-15 03:14:56.636375] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:25.537 [2024-05-15 03:14:56.636385] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:25.537 [2024-05-15 03:14:56.636398] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:25.537 [2024-05-15 03:14:56.636410] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12ace70 name raid_bdev1, state configuring 00:20:25.537 [2024-05-15 03:14:56.636438] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:25.537 BaseBdev2 00:20:25.537 03:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:25.805 03:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:26.063 [2024-05-15 03:14:57.133229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:26.063 [2024-05-15 03:14:57.133266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.063 [2024-05-15 03:14:57.133282] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10ff690 00:20:26.063 [2024-05-15 03:14:57.133291] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.063 [2024-05-15 
03:14:57.133653] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.063 [2024-05-15 03:14:57.133668] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:26.063 [2024-05-15 03:14:57.133742] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:26.063 [2024-05-15 03:14:57.133758] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:26.063 spare 00:20:26.063 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:26.063 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:26.063 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:26.063 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:26.063 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:26.063 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:26.063 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:26.063 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:26.063 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:26.063 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:26.063 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.063 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.321 [2024-05-15 03:14:57.234082] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1101870 00:20:26.321 [2024-05-15 03:14:57.234095] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:26.321 [2024-05-15 03:14:57.234298] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1100eb0 00:20:26.321 [2024-05-15 03:14:57.234458] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1101870 00:20:26.321 [2024-05-15 03:14:57.234466] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1101870 00:20:26.321 [2024-05-15 03:14:57.234573] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.321 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.321 "name": "raid_bdev1", 00:20:26.321 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:26.321 "strip_size_kb": 0, 00:20:26.321 "state": "online", 00:20:26.321 "raid_level": "raid1", 00:20:26.321 "superblock": true, 00:20:26.321 "num_base_bdevs": 2, 00:20:26.321 "num_base_bdevs_discovered": 2, 00:20:26.321 "num_base_bdevs_operational": 2, 00:20:26.321 "base_bdevs_list": [ 00:20:26.321 { 00:20:26.321 "name": "spare", 00:20:26.321 "uuid": "0a59c506-e38c-589c-93b8-22a60bc0833c", 00:20:26.321 "is_configured": true, 00:20:26.321 "data_offset": 2048, 00:20:26.321 "data_size": 63488 00:20:26.321 }, 00:20:26.321 { 00:20:26.321 "name": "BaseBdev2", 00:20:26.321 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:26.321 "is_configured": true, 00:20:26.321 
"data_offset": 2048, 00:20:26.321 "data_size": 63488 00:20:26.321 } 00:20:26.321 ] 00:20:26.321 }' 00:20:26.321 03:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.321 03:14:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.886 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:26.886 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:26.886 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:26.886 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:26.886 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:26.886 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.886 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.144 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:27.144 "name": "raid_bdev1", 00:20:27.144 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:27.144 "strip_size_kb": 0, 00:20:27.144 "state": "online", 00:20:27.144 "raid_level": "raid1", 00:20:27.144 "superblock": true, 00:20:27.144 "num_base_bdevs": 2, 00:20:27.144 "num_base_bdevs_discovered": 2, 00:20:27.144 "num_base_bdevs_operational": 2, 00:20:27.144 "base_bdevs_list": [ 00:20:27.144 { 00:20:27.144 "name": "spare", 00:20:27.144 "uuid": "0a59c506-e38c-589c-93b8-22a60bc0833c", 00:20:27.144 "is_configured": true, 00:20:27.144 "data_offset": 2048, 00:20:27.144 "data_size": 63488 00:20:27.144 }, 00:20:27.144 { 00:20:27.144 "name": "BaseBdev2", 00:20:27.144 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:27.144 "is_configured": true, 00:20:27.144 "data_offset": 2048, 00:20:27.144 "data_size": 63488 00:20:27.144 } 00:20:27.144 ] 00:20:27.144 }' 00:20:27.144 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:27.401 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:27.401 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:27.401 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:27.401 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.401 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:27.658 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.658 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:27.915 [2024-05-15 03:14:58.862005] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:27.915 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:27.915 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=raid_bdev1 00:20:27.915 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:27.915 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:27.915 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:27.915 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:27.915 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:27.915 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:27.915 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:27.915 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:27.915 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.915 03:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.172 03:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.172 "name": "raid_bdev1", 00:20:28.172 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:28.172 "strip_size_kb": 0, 00:20:28.172 "state": "online", 00:20:28.172 "raid_level": "raid1", 00:20:28.172 "superblock": true, 00:20:28.172 "num_base_bdevs": 2, 00:20:28.172 "num_base_bdevs_discovered": 1, 00:20:28.172 "num_base_bdevs_operational": 1, 00:20:28.172 "base_bdevs_list": [ 00:20:28.172 { 00:20:28.172 "name": null, 00:20:28.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.172 "is_configured": false, 00:20:28.172 "data_offset": 2048, 00:20:28.172 "data_size": 63488 00:20:28.172 }, 00:20:28.172 { 00:20:28.172 "name": "BaseBdev2", 00:20:28.172 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:28.172 "is_configured": true, 00:20:28.172 "data_offset": 2048, 00:20:28.172 "data_size": 63488 00:20:28.172 } 00:20:28.172 ] 00:20:28.172 }' 00:20:28.172 03:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.172 03:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.737 03:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:28.995 [2024-05-15 03:14:59.956963] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:28.995 [2024-05-15 03:14:59.957111] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:28.995 [2024-05-15 03:14:59.957126] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
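
The block above is the degraded-state check that bdev_raid.sh repeats after every fault it injects: fetch the raid bdev's JSON over the RPC socket, select it by name with jq, and compare state, raid_level and the base-bdev counters against expectations, then re-add the removed device. Condensed into a standalone sketch (a hand-written summary, not a quote from the script; the socket path, RPC names and jq filters are exactly the ones visible in the trace):

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    jq -r '.state'                     <<< "$tmp"   # "online" is expected even while degraded
    jq -r '.raid_level'                <<< "$tmp"   # "raid1"
    jq -r '.num_base_bdevs_discovered' <<< "$tmp"   # 1 after the removal, 2 again after the re-add
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare   # explicit re-add; the on-disk superblock
                                                    # identifies spare, hence the "Re-adding" notice
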
00:20:28.995 [2024-05-15 03:14:59.957153] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:28.995 [2024-05-15 03:14:59.961821] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe101e0 00:20:28.995 [2024-05-15 03:14:59.963326] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:28.995 03:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # sleep 1 00:20:29.927 03:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.927 03:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:29.927 03:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:29.927 03:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:29.927 03:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:29.927 03:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.928 03:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.185 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:30.185 "name": "raid_bdev1", 00:20:30.185 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:30.185 "strip_size_kb": 0, 00:20:30.185 "state": "online", 00:20:30.185 "raid_level": "raid1", 00:20:30.185 "superblock": true, 00:20:30.185 "num_base_bdevs": 2, 00:20:30.185 "num_base_bdevs_discovered": 2, 00:20:30.185 "num_base_bdevs_operational": 2, 00:20:30.185 "process": { 00:20:30.185 "type": "rebuild", 00:20:30.185 "target": "spare", 00:20:30.185 "progress": { 00:20:30.185 "blocks": 24576, 00:20:30.185 "percent": 38 00:20:30.185 } 00:20:30.185 }, 00:20:30.185 "base_bdevs_list": [ 00:20:30.185 { 00:20:30.185 "name": "spare", 00:20:30.185 "uuid": "0a59c506-e38c-589c-93b8-22a60bc0833c", 00:20:30.185 "is_configured": true, 00:20:30.185 "data_offset": 2048, 00:20:30.185 "data_size": 63488 00:20:30.185 }, 00:20:30.185 { 00:20:30.185 "name": "BaseBdev2", 00:20:30.185 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:30.185 "is_configured": true, 00:20:30.185 "data_offset": 2048, 00:20:30.185 "data_size": 63488 00:20:30.185 } 00:20:30.185 ] 00:20:30.185 }' 00:20:30.185 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:30.185 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.185 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:30.185 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.185 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:30.442 [2024-05-15 03:15:01.563601] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.442 [2024-05-15 03:15:01.575580] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:30.442 [2024-05-15 03:15:01.575621] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:30.701 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.701 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:30.701 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:30.701 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:30.701 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:30.701 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:30.701 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.701 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:30.701 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.701 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.701 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.701 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.959 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:30.959 "name": "raid_bdev1", 00:20:30.959 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:30.959 "strip_size_kb": 0, 00:20:30.959 "state": "online", 00:20:30.959 "raid_level": "raid1", 00:20:30.959 "superblock": true, 00:20:30.959 "num_base_bdevs": 2, 00:20:30.959 "num_base_bdevs_discovered": 1, 00:20:30.959 "num_base_bdevs_operational": 1, 00:20:30.959 "base_bdevs_list": [ 00:20:30.959 { 00:20:30.959 "name": null, 00:20:30.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.959 "is_configured": false, 00:20:30.959 "data_offset": 2048, 00:20:30.959 "data_size": 63488 00:20:30.959 }, 00:20:30.959 { 00:20:30.959 "name": "BaseBdev2", 00:20:30.959 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:30.959 "is_configured": true, 00:20:30.959 "data_offset": 2048, 00:20:30.959 "data_size": 63488 00:20:30.959 } 00:20:30.959 ] 00:20:30.959 }' 00:20:30.959 03:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:30.959 03:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.524 03:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:31.781 [2024-05-15 03:15:02.710961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:31.781 [2024-05-15 03:15:02.711011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.781 [2024-05-15 03:15:02.711031] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1105950 00:20:31.781 [2024-05-15 03:15:02.711041] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.781 [2024-05-15 03:15:02.711426] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.781 [2024-05-15 03:15:02.711443] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 
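
From here the trace runs another iteration of the same fault-injection loop, this time driven purely through the passthru layer. As a hedged summary of one cycle (the commands and the sleep are lifted from the sh@772-sh@776 lines of the trace, with $rpc defined as in the earlier sketch):

    $rpc bdev_passthru_delete spare                    # "pull" the disk: raid1 drops to 1 of 2 base bdevs
    $rpc bdev_passthru_create -b spare_delay -p spare  # "re-insert" it: examine finds the raid superblock
    sleep 1                                            # give the rebuild time to start
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.type'   # "rebuild"
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.target' # "spare"
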
00:20:31.781 [2024-05-15 03:15:02.711522] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:31.781 [2024-05-15 03:15:02.711532] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:31.781 [2024-05-15 03:15:02.711539] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:31.781 [2024-05-15 03:15:02.711561] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:31.781 [2024-05-15 03:15:02.716350] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12add20 00:20:31.781 spare 00:20:31.781 [2024-05-15 03:15:02.717776] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:31.781 03:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:32.713 03:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.713 03:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:32.713 03:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:32.713 03:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:32.713 03:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:32.713 03:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.713 03:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.970 03:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:32.970 "name": "raid_bdev1", 00:20:32.970 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:32.970 "strip_size_kb": 0, 00:20:32.970 "state": "online", 00:20:32.970 "raid_level": "raid1", 00:20:32.970 "superblock": true, 00:20:32.970 "num_base_bdevs": 2, 00:20:32.970 "num_base_bdevs_discovered": 2, 00:20:32.970 "num_base_bdevs_operational": 2, 00:20:32.970 "process": { 00:20:32.970 "type": "rebuild", 00:20:32.970 "target": "spare", 00:20:32.970 "progress": { 00:20:32.970 "blocks": 24576, 00:20:32.970 "percent": 38 00:20:32.970 } 00:20:32.970 }, 00:20:32.970 "base_bdevs_list": [ 00:20:32.970 { 00:20:32.970 "name": "spare", 00:20:32.970 "uuid": "0a59c506-e38c-589c-93b8-22a60bc0833c", 00:20:32.970 "is_configured": true, 00:20:32.970 "data_offset": 2048, 00:20:32.970 "data_size": 63488 00:20:32.970 }, 00:20:32.970 { 00:20:32.970 "name": "BaseBdev2", 00:20:32.970 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:32.970 "is_configured": true, 00:20:32.970 "data_offset": 2048, 00:20:32.970 "data_size": 63488 00:20:32.970 } 00:20:32.970 ] 00:20:32.970 }' 00:20:32.970 03:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:32.970 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.970 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.970 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.970 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:33.227 [2024-05-15 03:15:04.313118] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:33.227 [2024-05-15 03:15:04.330026] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:33.227 [2024-05-15 03:15:04.330067] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.227 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:33.227 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:33.227 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:33.227 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:33.227 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:33.227 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:33.227 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:33.227 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:33.227 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:33.227 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:33.227 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.227 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.484 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:33.484 "name": "raid_bdev1", 00:20:33.484 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:33.484 "strip_size_kb": 0, 00:20:33.484 "state": "online", 00:20:33.485 "raid_level": "raid1", 00:20:33.485 "superblock": true, 00:20:33.485 "num_base_bdevs": 2, 00:20:33.485 "num_base_bdevs_discovered": 1, 00:20:33.485 "num_base_bdevs_operational": 1, 00:20:33.485 "base_bdevs_list": [ 00:20:33.485 { 00:20:33.485 "name": null, 00:20:33.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.485 "is_configured": false, 00:20:33.485 "data_offset": 2048, 00:20:33.485 "data_size": 63488 00:20:33.485 }, 00:20:33.485 { 00:20:33.485 "name": "BaseBdev2", 00:20:33.485 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:33.485 "is_configured": true, 00:20:33.485 "data_offset": 2048, 00:20:33.485 "data_size": 63488 00:20:33.485 } 00:20:33.485 ] 00:20:33.485 }' 00:20:33.485 03:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:33.485 03:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.418 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:34.418 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:34.418 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:34.418 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:34.418 03:15:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:34.418 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.418 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.418 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:34.418 "name": "raid_bdev1", 00:20:34.418 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:34.418 "strip_size_kb": 0, 00:20:34.418 "state": "online", 00:20:34.418 "raid_level": "raid1", 00:20:34.418 "superblock": true, 00:20:34.418 "num_base_bdevs": 2, 00:20:34.418 "num_base_bdevs_discovered": 1, 00:20:34.418 "num_base_bdevs_operational": 1, 00:20:34.418 "base_bdevs_list": [ 00:20:34.418 { 00:20:34.418 "name": null, 00:20:34.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.418 "is_configured": false, 00:20:34.418 "data_offset": 2048, 00:20:34.418 "data_size": 63488 00:20:34.418 }, 00:20:34.418 { 00:20:34.418 "name": "BaseBdev2", 00:20:34.418 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:34.418 "is_configured": true, 00:20:34.418 "data_offset": 2048, 00:20:34.418 "data_size": 63488 00:20:34.418 } 00:20:34.418 ] 00:20:34.418 }' 00:20:34.418 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:34.418 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:34.418 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:34.677 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:34.677 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:34.936 03:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@785 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:34.936 [2024-05-15 03:15:06.083075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:34.936 [2024-05-15 03:15:06.083125] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.936 [2024-05-15 03:15:06.083141] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11061c0 00:20:34.936 [2024-05-15 03:15:06.083151] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.936 [2024-05-15 03:15:06.083510] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.936 [2024-05-15 03:15:06.083526] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:34.936 [2024-05-15 03:15:06.083589] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:34.936 [2024-05-15 03:15:06.083605] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:34.936 [2024-05-15 03:15:06.083612] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:34.936 BaseBdev1 00:20:35.194 03:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # 
sleep 1 00:20:36.129 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:36.129 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:36.129 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:36.129 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:36.129 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:36.129 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:36.129 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:36.129 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:36.129 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:36.129 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:36.129 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.129 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.388 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:36.388 "name": "raid_bdev1", 00:20:36.388 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:36.388 "strip_size_kb": 0, 00:20:36.388 "state": "online", 00:20:36.388 "raid_level": "raid1", 00:20:36.388 "superblock": true, 00:20:36.388 "num_base_bdevs": 2, 00:20:36.388 "num_base_bdevs_discovered": 1, 00:20:36.388 "num_base_bdevs_operational": 1, 00:20:36.388 "base_bdevs_list": [ 00:20:36.388 { 00:20:36.388 "name": null, 00:20:36.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.388 "is_configured": false, 00:20:36.388 "data_offset": 2048, 00:20:36.388 "data_size": 63488 00:20:36.388 }, 00:20:36.388 { 00:20:36.388 "name": "BaseBdev2", 00:20:36.388 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:36.388 "is_configured": true, 00:20:36.388 "data_offset": 2048, 00:20:36.388 "data_size": 63488 00:20:36.388 } 00:20:36.388 ] 00:20:36.388 }' 00:20:36.388 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:36.388 03:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.955 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:36.955 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:36.955 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:36.955 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:36.955 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:36.955 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.955 03:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:37.214 "name": "raid_bdev1", 00:20:37.214 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:37.214 "strip_size_kb": 0, 00:20:37.214 "state": "online", 00:20:37.214 "raid_level": "raid1", 00:20:37.214 "superblock": true, 00:20:37.214 "num_base_bdevs": 2, 00:20:37.214 "num_base_bdevs_discovered": 1, 00:20:37.214 "num_base_bdevs_operational": 1, 00:20:37.214 "base_bdevs_list": [ 00:20:37.214 { 00:20:37.214 "name": null, 00:20:37.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.214 "is_configured": false, 00:20:37.214 "data_offset": 2048, 00:20:37.214 "data_size": 63488 00:20:37.214 }, 00:20:37.214 { 00:20:37.214 "name": "BaseBdev2", 00:20:37.214 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:37.214 "is_configured": true, 00:20:37.214 "data_offset": 2048, 00:20:37.214 "data_size": 63488 00:20:37.214 } 00:20:37.214 ] 00:20:37.214 }' 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:37.214 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:37.473 [2024-05-15 03:15:08.465487] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:37.473 [2024-05-15 03:15:08.465607] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:20:37.473 [2024-05-15 03:15:08.465620] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:20:37.473 request:
00:20:37.473 {
00:20:37.473 "raid_bdev": "raid_bdev1",
00:20:37.473 "base_bdev": "BaseBdev1",
00:20:37.473 "method": "bdev_raid_add_base_bdev",
00:20:37.473 "req_id": 1
00:20:37.473 }
00:20:37.473 Got JSON-RPC error response
00:20:37.473 response:
00:20:37.473 {
00:20:37.473 "code": -22,
00:20:37.473 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:20:37.473 }
00:20:37.473 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1
00:20:37.473 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:20:37.473 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:20:37.473 03:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:20:37.473 03:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # sleep 1
00:20:38.407 03:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:20:38.407 03:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:38.407 03:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:38.407 03:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:38.407 03:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:38.407 03:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:20:38.407 03:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:38.407 03:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:38.407 03:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:38.407 03:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:38.407 03:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:38.407 03:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:38.666 03:15:09
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:38.666 03:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.233 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:39.233 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:39.233 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:39.233 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:39.233 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:39.233 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.233 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.491 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:39.491 "name": "raid_bdev1", 00:20:39.491 "uuid": "94c68f2e-bb2e-4fb8-a73b-5958160d9586", 00:20:39.491 "strip_size_kb": 0, 00:20:39.491 "state": "online", 00:20:39.491 "raid_level": "raid1", 00:20:39.491 "superblock": true, 00:20:39.491 "num_base_bdevs": 2, 00:20:39.491 "num_base_bdevs_discovered": 1, 00:20:39.491 "num_base_bdevs_operational": 1, 00:20:39.491 "base_bdevs_list": [ 00:20:39.491 { 00:20:39.491 "name": null, 00:20:39.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.491 "is_configured": false, 00:20:39.492 "data_offset": 2048, 00:20:39.492 "data_size": 63488 00:20:39.492 }, 00:20:39.492 { 00:20:39.492 "name": "BaseBdev2", 00:20:39.492 "uuid": "0bf646e4-68b7-5378-919c-6313cbb1e52e", 00:20:39.492 "is_configured": true, 00:20:39.492 "data_offset": 2048, 00:20:39.492 "data_size": 63488 00:20:39.492 } 00:20:39.492 ] 00:20:39.492 }' 00:20:39.492 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # killprocess 4154619 00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 4154619 ']' 00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 4154619 00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4154619 00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4154619' 00:20:39.750 killing process with pid 4154619 00:20:39.750 03:15:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 4154619
00:20:39.750 Received shutdown signal, test time was about 60.000000 seconds
00:20:39.750 00
00:20:39.750 Latency(us)
00:20:39.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:39.750 ===================================================================================================================
00:20:39.750 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:20:39.750 [2024-05-15 03:15:10.749206] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:39.750 [2024-05-15 03:15:10.749307] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:39.750 [2024-05-15 03:15:10.749350] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:39.750 [2024-05-15 03:15:10.749359] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1101870 name raid_bdev1, state offline
00:20:39.750 03:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 4154619
00:20:39.750 [2024-05-15 03:15:10.774522] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:40.025 03:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@797 -- # return 0
00:20:40.025
00:20:40.025 real 0m36.319s
00:20:40.025 user 0m55.121s
00:20:40.025 sys 0m5.265s
00:20:40.025 03:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable
00:20:40.025 03:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:40.025 ************************************
00:20:40.025 END TEST raid_rebuild_test_sb
00:20:40.025 ************************************
00:20:40.025 03:15:11 bdev_raid -- bdev/bdev_raid.sh@825 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true
00:20:40.025 03:15:11 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:20:40.025 03:15:11 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:20:40.025 03:15:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:20:40.025 ************************************
00:20:40.025 START TEST raid_rebuild_test_io
00:20:40.025 ************************************
00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 false true true
00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1
00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=2
00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local superblock=false
00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local background_io=true
00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local verify=true
00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i = 1 ))
00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs ))
00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # echo BaseBdev1
00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i++ ))
00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs ))
00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # echo BaseBdev2
00:20:40.025
03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:20:40.025 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # local strip_size 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@582 -- # local create_arg 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local data_offset 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # raid_pid=4161563 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # waitforlisten 4161563 /var/tmp/spdk-raid.sock 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@827 -- # '[' -z 4161563 ']' 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:40.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:40.026 03:15:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:40.026 [2024-05-15 03:15:11.136543] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:20:40.026 [2024-05-15 03:15:11.136597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161563 ] 00:20:40.026 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:40.026 Zero copy mechanism will not be used. 
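
For readability, here is the bdevperf invocation recorded above, rewrapped with annotations. The flag glosses are a best-effort reading based on common SPDK bdevperf usage, not something this log itself states, so treat them as assumptions rather than documentation:

    # -r: RPC socket the test's rpc.py calls connect to (same path as their -s argument)
    # -T: bdev to run the background job against; -t: run time in seconds
    # -w randrw -M 50: random I/O, 50% reads / 50% writes
    # -o 3M -q 2: 3 MiB I/Os at queue depth 2 -- hence the zero-copy notice just above
    # -z: start up and wait for an RPC before running the tests; -U: as recorded, not glossed here
    # -L bdev_raid: enable the bdev_raid debug log component (the *DEBUG* lines throughout)
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid
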
00:20:40.312 [2024-05-15 03:15:11.234076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:40.312 [2024-05-15 03:15:11.327655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:40.312 [2024-05-15 03:15:11.382380] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:40.312 [2024-05-15 03:15:11.382410] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:41.247 03:15:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:20:41.247 03:15:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # return 0
00:20:41.247 03:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}"
00:20:41.247 03:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:20:41.247 BaseBdev1_malloc
00:20:41.247 03:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:20:41.506 [2024-05-15 03:15:12.567056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:20:41.506 [2024-05-15 03:15:12.567099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:41.506 [2024-05-15 03:15:12.567118] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x937b00
00:20:41.506 [2024-05-15 03:15:12.567128] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:41.506 [2024-05-15 03:15:12.568830] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:41.506 [2024-05-15 03:15:12.568869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:41.506 BaseBdev1
00:20:41.506 03:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}"
00:20:41.506 03:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:20:41.765 BaseBdev2_malloc
00:20:41.765 03:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:20:42.023 [2024-05-15 03:15:13.068974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:20:42.023 [2024-05-15 03:15:13.069014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:42.023 [2024-05-15 03:15:13.069031] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xadd860
00:20:42.023 [2024-05-15 03:15:13.069040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:42.023 [2024-05-15 03:15:13.070560] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:42.023 [2024-05-15 03:15:13.070585] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:20:42.023 BaseBdev2
00:20:42.023 03:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:20:42.282 spare_malloc
00:20:42.282 03:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:20:42.539 spare_delay
00:20:42.539 03:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@614 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:20:42.797 [2024-05-15 03:15:13.811385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:42.797 [2024-05-15 03:15:13.811425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:42.797 [2024-05-15 03:15:13.811445] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaddf50
00:20:42.797 [2024-05-15 03:15:13.811455] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:42.797 [2024-05-15 03:15:13.813063] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:42.797 [2024-05-15 03:15:13.813089] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:42.797 spare
00:20:42.797 03:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
00:20:43.058 [2024-05-15 03:15:14.064077] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:43.058 [2024-05-15 03:15:14.065423] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:43.058 [2024-05-15 03:15:14.065503] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x930460
00:20:43.058 [2024-05-15 03:15:14.065512] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:20:43.058 [2024-05-15 03:15:14.065723] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x930140
00:20:43.058 [2024-05-15 03:15:14.065885] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x930460
00:20:43.058 [2024-05-15 03:15:14.065895] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x930460
00:20:43.058 [2024-05-15 03:15:14.066013] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:43.058 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:43.058 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:43.058 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:43.058 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:43.058 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:43.058 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:20:43.058 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:43.058 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:43.058 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:43.058 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:43.058 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:43.058 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:43.316 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:43.316 "name": "raid_bdev1",
00:20:43.316 "uuid": "3cf2de7a-a3d5-42b2-ab8a-5659dd6864e4",
00:20:43.316 "strip_size_kb": 0,
00:20:43.316 "state": "online",
00:20:43.316 "raid_level": "raid1",
00:20:43.316 "superblock": false,
00:20:43.316 "num_base_bdevs": 2,
00:20:43.316 "num_base_bdevs_discovered": 2,
00:20:43.316 "num_base_bdevs_operational": 2,
00:20:43.316 "base_bdevs_list": [
00:20:43.316 {
00:20:43.316 "name": "BaseBdev1",
00:20:43.316 "uuid": "bb5339cb-ffc0-51e8-b0d8-9877d0c8b048",
00:20:43.316 "is_configured": true,
00:20:43.316 "data_offset": 0,
00:20:43.316 "data_size": 65536
00:20:43.316 },
00:20:43.316 {
00:20:43.316 "name": "BaseBdev2",
00:20:43.316 "uuid": "d3bedd2a-7b72-57bb-905a-2584e3bbbe05",
00:20:43.316 "is_configured": true,
00:20:43.316 "data_offset": 0,
00:20:43.316 "data_size": 65536
00:20:43.316 }
00:20:43.316 ]
00:20:43.316 }'
00:20:43.316 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:43.316 03:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:20:43.882 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:20:43.882 03:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks'
00:20:44.140 [2024-05-15 03:15:15.147190] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:44.140 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=65536
00:20:44.140 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:20:44.140 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:44.398 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # data_offset=0
00:20:44.398 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@626 -- # '[' true = true ']'
00:20:44.398 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:20:44.398 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:20:44.398 [2024-05-15 03:15:15.534022] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x92eb60
00:20:44.398 I/O size of 3145728 is greater than zero copy threshold (65536).
00:20:44.398 Zero copy mechanism will not be used.
00:20:44.398 Running I/O for 60 seconds...
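Condensed, the bdev stack assembled above comes down to a handful of RPCs (commands exactly as issued in the trace; rpc.py here stands for scripts/rpc.py -s /var/tmp/spdk-raid.sock). Each base bdev is a 32 MiB / 512-byte-block malloc bdev wrapped in a passthru bdev, which appears to be what lets a leg be detached later; the spare additionally sits on a delay bdev, presumably so the rebuild runs slowly enough to observe:

    rpc.py bdev_malloc_create 32 512 -b BaseBdev1_malloc
    rpc.py bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    rpc.py bdev_malloc_create 32 512 -b BaseBdev2_malloc
    rpc.py bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    rpc.py bdev_malloc_create 32 512 -b spare_malloc
    rpc.py bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    rpc.py bdev_passthru_create -b spare_delay -p spare
    rpc.py bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1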
00:20:44.655 [2024-05-15 03:15:15.662532] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:20:44.655 [2024-05-15 03:15:15.671422] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x92eb60
00:20:44.655 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:20:44.655 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:44.655 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:44.655 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:44.655 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:44.655 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:20:44.655 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:44.655 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:44.655 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:44.655 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:44.655 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:44.655 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:44.913 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:44.913 "name": "raid_bdev1",
00:20:44.913 "uuid": "3cf2de7a-a3d5-42b2-ab8a-5659dd6864e4",
00:20:44.913 "strip_size_kb": 0,
00:20:44.913 "state": "online",
00:20:44.913 "raid_level": "raid1",
00:20:44.913 "superblock": false,
00:20:44.913 "num_base_bdevs": 2,
00:20:44.913 "num_base_bdevs_discovered": 1,
00:20:44.913 "num_base_bdevs_operational": 1,
00:20:44.913 "base_bdevs_list": [
00:20:44.913 {
00:20:44.913 "name": null,
00:20:44.913 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:44.913 "is_configured": false,
00:20:44.913 "data_offset": 0,
00:20:44.913 "data_size": 65536
00:20:44.913 },
00:20:44.913 {
00:20:44.913 "name": "BaseBdev2",
00:20:44.913 "uuid": "d3bedd2a-7b72-57bb-905a-2584e3bbbe05",
00:20:44.913 "is_configured": true,
00:20:44.913 "data_offset": 0,
00:20:44.913 "data_size": 65536
00:20:44.913 }
00:20:44.913 ]
00:20:44.913 }'
00:20:44.913 03:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:44.913 03:15:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:20:45.479 03:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:45.738 [2024-05-15 03:15:16.849045] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:45.738 [2024-05-15 03:15:16.894035] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x934860
00:20:45.738 03:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # sleep 1
00:20:45.738 [2024-05-15 03:15:16.896198] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:45.996 [2024-05-15 03:15:17.008202] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:20:45.996 [2024-05-15 03:15:17.008447] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:20:46.255 [2024-05-15 03:15:17.238128] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:20:46.255 [2024-05-15 03:15:17.238327] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:20:46.514 [2024-05-15 03:15:17.631655] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:20:46.773 [2024-05-15 03:15:17.752105] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:20:46.773 03:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:46.773 03:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:46.773 03:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:46.773 03:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:46.773 03:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:46.773 03:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:46.773 03:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:47.032 [2024-05-15 03:15:18.102710] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:20:47.032 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:47.032 "name": "raid_bdev1",
00:20:47.032 "uuid": "3cf2de7a-a3d5-42b2-ab8a-5659dd6864e4",
00:20:47.032 "strip_size_kb": 0,
00:20:47.032 "state": "online",
00:20:47.032 "raid_level": "raid1",
00:20:47.032 "superblock": false,
00:20:47.032 "num_base_bdevs": 2,
00:20:47.032 "num_base_bdevs_discovered": 2,
00:20:47.032 "num_base_bdevs_operational": 2,
00:20:47.032 "process": {
00:20:47.032 "type": "rebuild",
00:20:47.032 "target": "spare",
00:20:47.032 "progress": {
00:20:47.032 "blocks": 14336,
00:20:47.032 "percent": 21
00:20:47.032 }
00:20:47.032 },
00:20:47.032 "base_bdevs_list": [
00:20:47.032 {
00:20:47.032 "name": "spare",
00:20:47.032 "uuid": "f9e67df2-e827-5fae-bebb-600dcd0f9821",
00:20:47.032 "is_configured": true,
00:20:47.032 "data_offset": 0,
00:20:47.032 "data_size": 65536
00:20:47.032 },
00:20:47.032 {
00:20:47.032 "name": "BaseBdev2",
00:20:47.032 "uuid": "d3bedd2a-7b72-57bb-905a-2584e3bbbe05",
00:20:47.032 "is_configured": true,
00:20:47.032 "data_offset": 0,
00:20:47.032 "data_size": 65536
00:20:47.032 }
00:20:47.032 ]
00:20:47.032 }'
00:20:47.291 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:47.291 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:47.291 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:47.291 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:47.291 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:20:47.291 [2024-05-15 03:15:18.332804] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:20:47.291 [2024-05-15 03:15:18.332998] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:20:47.291 [2024-05-15 03:15:18.444546] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:47.291 [2024-05-15 03:15:18.444601] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:20:47.550 [2024-05-15 03:15:18.545901] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:47.550 [2024-05-15 03:15:18.547528] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:47.550 [2024-05-15 03:15:18.579659] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x92eb60
00:20:47.550 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:20:47.550 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:47.550 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:47.550 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:47.550 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:47.550 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:20:47.550 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:47.550 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:47.550 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:47.550 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:47.550 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:47.550 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:47.809 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:47.809 "name": "raid_bdev1",
00:20:47.809 "uuid": "3cf2de7a-a3d5-42b2-ab8a-5659dd6864e4",
00:20:47.809 "strip_size_kb": 0,
00:20:47.809 "state": "online",
00:20:47.809 "raid_level": "raid1",
00:20:47.809 "superblock": false,
00:20:47.809 "num_base_bdevs": 2,
00:20:47.809 "num_base_bdevs_discovered": 1,
00:20:47.809 "num_base_bdevs_operational": 1,
00:20:47.809 "base_bdevs_list": [
00:20:47.809 {
00:20:47.809 "name": null,
00:20:47.809 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:47.809 "is_configured": false,
00:20:47.809 "data_offset": 0,
00:20:47.809 "data_size": 65536
00:20:47.809 },
00:20:47.809 {
00:20:47.809 "name": "BaseBdev2",
00:20:47.809 "uuid": "d3bedd2a-7b72-57bb-905a-2584e3bbbe05",
00:20:47.809 "is_configured": true,
00:20:47.809 "data_offset": 0,
00:20:47.809 "data_size": 65536
00:20:47.809 }
00:20:47.809 ]
00:20:47.809 }'
00:20:47.809 03:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:47.809 03:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:20:48.375 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:48.375 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:48.375 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:48.375 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=none
00:20:48.375 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:48.375 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:48.375 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:48.633 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:48.634 "name": "raid_bdev1",
00:20:48.634 "uuid": "3cf2de7a-a3d5-42b2-ab8a-5659dd6864e4",
00:20:48.634 "strip_size_kb": 0,
00:20:48.634 "state": "online",
00:20:48.634 "raid_level": "raid1",
00:20:48.634 "superblock": false,
00:20:48.634 "num_base_bdevs": 2,
00:20:48.634 "num_base_bdevs_discovered": 1,
00:20:48.634 "num_base_bdevs_operational": 1,
00:20:48.634 "base_bdevs_list": [
00:20:48.634 {
00:20:48.634 "name": null,
00:20:48.634 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:48.634 "is_configured": false,
00:20:48.634 "data_offset": 0,
00:20:48.634 "data_size": 65536
00:20:48.634 },
00:20:48.634 {
00:20:48.634 "name": "BaseBdev2",
00:20:48.634 "uuid": "d3bedd2a-7b72-57bb-905a-2584e3bbbe05",
00:20:48.634 "is_configured": true,
00:20:48.634 "data_offset": 0,
00:20:48.634 "data_size": 65536
00:20:48.634 }
00:20:48.634 ]
00:20:48.634 }'
00:20:48.634 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:48.634 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:48.893 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:48.893 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:48.893 03:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@667 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:49.151 [2024-05-15 03:15:20.053867] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:49.151 [2024-05-15 03:15:20.090062] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x97e160
00:20:49.151 [2024-05-15 03:15:20.091580] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:49.151 03:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # sleep 1
00:20:49.151 [2024-05-15 03:15:20.207123] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:20:49.410 [2024-05-15 03:15:20.419962] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:20:49.410 [2024-05-15 03:15:20.420163] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:20:49.976 [2024-05-15 03:15:20.917743] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:20:49.976 [2024-05-15 03:15:20.917928] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:20:49.976 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:49.976 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:49.976 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:49.976 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:49.976 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:49.977 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:49.977 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:50.235 [2024-05-15 03:15:21.187012] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:20:50.235 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:50.235 "name": "raid_bdev1",
00:20:50.235 "uuid": "3cf2de7a-a3d5-42b2-ab8a-5659dd6864e4",
00:20:50.235 "strip_size_kb": 0,
00:20:50.235 "state": "online",
00:20:50.235 "raid_level": "raid1",
00:20:50.235 "superblock": false,
00:20:50.235 "num_base_bdevs": 2,
00:20:50.235 "num_base_bdevs_discovered": 2,
00:20:50.235 "num_base_bdevs_operational": 2,
00:20:50.235 "process": {
00:20:50.235 "type": "rebuild",
00:20:50.235 "target": "spare",
00:20:50.235 "progress": {
00:20:50.235 "blocks": 14336,
00:20:50.235 "percent": 21
00:20:50.235 }
00:20:50.235 },
00:20:50.235 "base_bdevs_list": [
00:20:50.235 {
00:20:50.235 "name": "spare",
00:20:50.235 "uuid": "f9e67df2-e827-5fae-bebb-600dcd0f9821",
00:20:50.235 "is_configured": true,
00:20:50.235 "data_offset": 0,
00:20:50.235 "data_size": 65536
00:20:50.235 },
00:20:50.235 {
00:20:50.235 "name": "BaseBdev2",
00:20:50.235 "uuid": "d3bedd2a-7b72-57bb-905a-2584e3bbbe05",
00:20:50.235 "is_configured": true,
00:20:50.235 "data_offset": 0,
00:20:50.235 "data_size": 65536
00:20:50.235 }
00:20:50.235 ]
00:20:50.235 }'
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # '[' false = true ']'
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=2
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']'
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # '[' 2 -gt 2 ']'
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # local timeout=698
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout ))
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:50.494 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:50.494 [2024-05-15 03:15:21.444908] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:20:50.754 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:50.754 "name": "raid_bdev1",
00:20:50.754 "uuid": "3cf2de7a-a3d5-42b2-ab8a-5659dd6864e4",
00:20:50.754 "strip_size_kb": 0,
00:20:50.754 "state": "online",
00:20:50.754 "raid_level": "raid1",
00:20:50.754 "superblock": false,
00:20:50.754 "num_base_bdevs": 2,
00:20:50.754 "num_base_bdevs_discovered": 2,
00:20:50.754 "num_base_bdevs_operational": 2,
00:20:50.754 "process": {
00:20:50.754 "type": "rebuild",
00:20:50.754 "target": "spare",
00:20:50.754 "progress": {
00:20:50.754 "blocks": 16384,
00:20:50.754 "percent": 25
00:20:50.754 }
00:20:50.754 },
00:20:50.754 "base_bdevs_list": [
00:20:50.754 {
00:20:50.754 "name": "spare",
00:20:50.754 "uuid": "f9e67df2-e827-5fae-bebb-600dcd0f9821",
00:20:50.754 "is_configured": true,
00:20:50.754 "data_offset": 0,
00:20:50.754 "data_size": 65536
00:20:50.754 },
00:20:50.754 {
00:20:50.754 "name": "BaseBdev2",
00:20:50.754 "uuid": "d3bedd2a-7b72-57bb-905a-2584e3bbbe05",
00:20:50.754 "is_configured": true,
00:20:50.754 "data_offset": 0,
00:20:50.754 "data_size": 65536
00:20:50.754 }
00:20:50.754 ]
00:20:50.754 }'
00:20:50.754 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:50.754 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:50.754 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:50.754 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:50.754 03:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # sleep 1
00:20:51.013 [2024-05-15 03:15:21.952594] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:20:51.013 [2024-05-15 03:15:21.952768] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:20:51.271 [2024-05-15 03:15:22.376813] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
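The wait loop traced here (sh@711-716) is a plain poll: while the bash SECONDS counter is below the deadline, re-read the raid JSON and check the process fields. A sketch of the shape of that loop, with helper bodies paraphrased from the trace rather than quoted from bdev_raid.sh:

    timeout=$((SECONDS + 60))                      # the trace shows an absolute deadline, here 698
    while (( SECONDS < timeout )); do
        info=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1")')
        # rebuild is done once the process object disappears from the JSON
        [ "$(jq -r '.process.type // "none"' <<< "$info")" = none ] && break
        sleep 1
    done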
00:20:51.838 [2024-05-15 03:15:22.728826] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:20:51.838 03:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout ))
00:20:51.838 03:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:51.838 03:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:51.838 03:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:51.838 03:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:51.838 03:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:51.838 03:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:51.839 03:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:52.097 03:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:52.097 "name": "raid_bdev1",
00:20:52.097 "uuid": "3cf2de7a-a3d5-42b2-ab8a-5659dd6864e4",
00:20:52.097 "strip_size_kb": 0,
00:20:52.097 "state": "online",
00:20:52.097 "raid_level": "raid1",
00:20:52.097 "superblock": false,
00:20:52.097 "num_base_bdevs": 2,
00:20:52.097 "num_base_bdevs_discovered": 2,
00:20:52.097 "num_base_bdevs_operational": 2,
00:20:52.097 "process": {
00:20:52.097 "type": "rebuild",
00:20:52.097 "target": "spare",
00:20:52.097 "progress": {
00:20:52.097 "blocks": 34816,
00:20:52.097 "percent": 53
00:20:52.097 }
00:20:52.097 },
00:20:52.097 "base_bdevs_list": [
00:20:52.097 {
00:20:52.097 "name": "spare",
00:20:52.097 "uuid": "f9e67df2-e827-5fae-bebb-600dcd0f9821",
00:20:52.097 "is_configured": true,
00:20:52.097 "data_offset": 0,
00:20:52.097 "data_size": 65536
00:20:52.097 },
00:20:52.097 {
00:20:52.097 "name": "BaseBdev2",
00:20:52.097 "uuid": "d3bedd2a-7b72-57bb-905a-2584e3bbbe05",
00:20:52.097 "is_configured": true,
00:20:52.097 "data_offset": 0,
00:20:52.097 "data_size": 65536
00:20:52.097 }
00:20:52.097 ]
00:20:52.097 }'
00:20:52.097 03:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:52.097 03:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:52.097 03:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:52.097 03:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:52.097 03:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # sleep 1
00:20:52.097 [2024-05-15 03:15:23.190082] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:20:53.031 [2024-05-15 03:15:23.863554] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296
00:20:53.031 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout ))
00:20:53.031 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:53.031 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:53.031 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:53.031 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:53.031 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:53.031 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:53.031 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:53.290 [2024-05-15 03:15:24.324129] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:20:53.290 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:53.290 "name": "raid_bdev1",
00:20:53.290 "uuid": "3cf2de7a-a3d5-42b2-ab8a-5659dd6864e4",
00:20:53.290 "strip_size_kb": 0,
00:20:53.290 "state": "online",
00:20:53.290 "raid_level": "raid1",
00:20:53.290 "superblock": false,
00:20:53.290 "num_base_bdevs": 2,
00:20:53.290 "num_base_bdevs_discovered": 2,
00:20:53.290 "num_base_bdevs_operational": 2,
00:20:53.290 "process": {
00:20:53.290 "type": "rebuild",
00:20:53.290 "target": "spare",
00:20:53.290 "progress": {
00:20:53.290 "blocks": 59392,
00:20:53.290 "percent": 90
00:20:53.290 }
00:20:53.290 },
00:20:53.290 "base_bdevs_list": [
00:20:53.290 {
00:20:53.290 "name": "spare",
00:20:53.290 "uuid": "f9e67df2-e827-5fae-bebb-600dcd0f9821",
00:20:53.290 "is_configured": true,
00:20:53.290 "data_offset": 0,
00:20:53.290 "data_size": 65536
00:20:53.290 },
00:20:53.290 {
00:20:53.290 "name": "BaseBdev2",
00:20:53.290 "uuid": "d3bedd2a-7b72-57bb-905a-2584e3bbbe05",
00:20:53.290 "is_configured": true,
00:20:53.290 "data_offset": 0,
00:20:53.290 "data_size": 65536
00:20:53.290 }
00:20:53.290 ]
00:20:53.290 }'
00:20:53.290 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:53.290 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:53.290 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:53.290 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:53.290 03:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # sleep 1
00:20:53.858 [2024-05-15 03:15:24.776179] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:20:53.858 [2024-05-15 03:15:24.876474] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:20:53.858 [2024-05-15 03:15:24.887220] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:54.454 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout ))
00:20:54.454 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:54.454 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:54.454 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:54.454 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:54.454 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:54.454 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:54.454 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:54.712 "name": "raid_bdev1",
00:20:54.712 "uuid": "3cf2de7a-a3d5-42b2-ab8a-5659dd6864e4",
00:20:54.712 "strip_size_kb": 0,
00:20:54.712 "state": "online",
00:20:54.712 "raid_level": "raid1",
00:20:54.712 "superblock": false,
00:20:54.712 "num_base_bdevs": 2,
00:20:54.712 "num_base_bdevs_discovered": 2,
00:20:54.712 "num_base_bdevs_operational": 2,
00:20:54.712 "base_bdevs_list": [
00:20:54.712 {
00:20:54.712 "name": "spare",
00:20:54.712 "uuid": "f9e67df2-e827-5fae-bebb-600dcd0f9821",
00:20:54.712 "is_configured": true,
00:20:54.712 "data_offset": 0,
00:20:54.712 "data_size": 65536
00:20:54.712 },
00:20:54.712 {
00:20:54.712 "name": "BaseBdev2",
00:20:54.712 "uuid": "d3bedd2a-7b72-57bb-905a-2584e3bbbe05",
00:20:54.712 "is_configured": true,
00:20:54.712 "data_offset": 0,
00:20:54.712 "data_size": 65536
00:20:54.712 }
00:20:54.712 ]
00:20:54.712 }'
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # break
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=none
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:54.712 03:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:54.971 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:54.971 "name": "raid_bdev1",
00:20:54.971 "uuid": "3cf2de7a-a3d5-42b2-ab8a-5659dd6864e4",
00:20:54.971 "strip_size_kb": 0,
00:20:54.971 "state": "online",
00:20:54.971 "raid_level": "raid1",
00:20:54.971 "superblock": false,
00:20:54.971 "num_base_bdevs": 2,
00:20:54.971 "num_base_bdevs_discovered": 2,
00:20:54.971 "num_base_bdevs_operational": 2,
00:20:54.971 "base_bdevs_list": [
00:20:54.971 {
00:20:54.971 "name": "spare",
00:20:54.971 "uuid": "f9e67df2-e827-5fae-bebb-600dcd0f9821",
00:20:54.971 "is_configured": true,
00:20:54.971 "data_offset": 0,
00:20:54.971 "data_size": 65536
00:20:54.971 },
00:20:54.971 {
00:20:54.971 "name": "BaseBdev2",
00:20:54.971 "uuid": "d3bedd2a-7b72-57bb-905a-2584e3bbbe05",
00:20:54.971 "is_configured": true,
00:20:54.971 "data_offset": 0,
00:20:54.971 "data_size": 65536
00:20:54.971 }
00:20:54.971 ]
00:20:54.971 }'
00:20:54.971 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:54.971 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:55.230 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:55.488 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:55.488 "name": "raid_bdev1",
00:20:55.488 "uuid": "3cf2de7a-a3d5-42b2-ab8a-5659dd6864e4",
00:20:55.488 "strip_size_kb": 0,
00:20:55.488 "state": "online",
00:20:55.488 "raid_level": "raid1",
00:20:55.488 "superblock": false,
00:20:55.488 "num_base_bdevs": 2,
00:20:55.488 "num_base_bdevs_discovered": 2,
00:20:55.488 "num_base_bdevs_operational": 2,
00:20:55.488 "base_bdevs_list": [
00:20:55.488 {
00:20:55.488 "name": "spare",
00:20:55.488 "uuid": "f9e67df2-e827-5fae-bebb-600dcd0f9821",
00:20:55.488 "is_configured": true,
00:20:55.488 "data_offset": 0,
00:20:55.488 "data_size": 65536
00:20:55.488 },
00:20:55.488 {
00:20:55.488 "name": "BaseBdev2",
00:20:55.488 "uuid": "d3bedd2a-7b72-57bb-905a-2584e3bbbe05",
00:20:55.488 "is_configured": true,
00:20:55.488 "data_offset": 0,
00:20:55.488 "data_size": 65536
00:20:55.488 }
00:20:55.488 ]
00:20:55.488 }'
00:20:55.488 03:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:55.488 03:15:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:20:56.054 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:20:56.313 [2024-05-15 03:15:27.252310] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:56.313 [2024-05-15 03:15:27.252340] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:56.313
00:20:56.313 Latency(us)
00:20:56.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:56.313 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:20:56.313 raid_bdev1 : 11.75 96.16 288.47 0.00 0.00 13625.84 300.37 119337.94
00:20:56.313 ===================================================================================================================
00:20:56.313 Total : 96.16 288.47 0.00 0.00 13625.84 300.37 119337.94
00:20:56.313 [2024-05-15 03:15:27.320652] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:56.313 [2024-05-15 03:15:27.320680] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:56.313 [2024-05-15 03:15:27.320755] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:56.313 [2024-05-15 03:15:27.320764] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x930460 name raid_bdev1, state offline
00:20:56.313 0
00:20:56.313 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:56.313 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # jq length
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]]
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' true = true ']'
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # '[' true = true ']'
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:56.571 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:20:56.829 /dev/nbd0
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:56.829 1+0 records in
00:20:56.829 1+0 records out
00:20:56.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235136 s, 17.4 MB/s
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # for bdev in "${base_bdevs[@]:1}"
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # '[' -z BaseBdev2 ']'
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2')
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:56.829 03:15:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1
00:20:57.088 /dev/nbd1
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:57.088 1+0 records in
00:20:57.088 1+0 records out
00:20:57.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144143 s, 28.4 MB/s
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@736 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:57.088 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:20:57.346 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
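Data integrity is checked by exporting both legs over NBD and comparing them byte-for-byte from offset 0 (the data_offset computed earlier in the run); stripped of the waitfornbd/dd readiness probes, the flow reduces to the commands traced around this point:

    rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
    rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1     # both raid1 legs must match after the rebuild
    rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
    rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0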
/var/tmp/spdk-raid.sock /dev/nbd0 00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:57.603 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@748 -- # '[' false = true ']' 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@795 -- # killprocess 4161563 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@946 -- # '[' -z 4161563 ']' 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # kill -0 4161563 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # uname 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4161563 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4161563' 00:20:57.861 killing process with pid 4161563 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@965 -- # kill 4161563 00:20:57.861 Received shutdown signal, test time was about 13.272174 seconds 00:20:57.861 00:20:57.861 Latency(us) 00:20:57.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.861 =================================================================================================================== 00:20:57.861 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.861 [2024-05-15 03:15:28.840834] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:57.861 03:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # wait 4161563 00:20:57.861 [2024-05-15 03:15:28.861276] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:58.119 03:15:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@797 -- # return 0 00:20:58.119 00:20:58.119 real 0m18.019s 00:20:58.119 user 0m28.175s 00:20:58.119 sys 0m2.147s 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:58.119 ************************************ 00:20:58.119 END TEST raid_rebuild_test_io 00:20:58.119 ************************************ 00:20:58.119 03:15:29 bdev_raid -- bdev/bdev_raid.sh@826 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:20:58.119 03:15:29 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:20:58.119 03:15:29 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:58.119 03:15:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.119 ************************************ 00:20:58.119 START TEST raid_rebuild_test_sb_io 00:20:58.119 ************************************ 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true true true 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=2 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local superblock=true 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local background_io=true 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local verify=true 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i = 1 )) 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # echo BaseBdev1 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # echo BaseBdev2 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # local strip_size 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@582 -- # local create_arg 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local data_offset 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:20:58.119 03:15:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # '[' true = true ']' 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # create_arg+=' -s' 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # raid_pid=4164642 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # waitforlisten 4164642 /var/tmp/spdk-raid.sock 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@827 -- # '[' -z 4164642 ']' 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:58.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:58.119 03:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:58.119 [2024-05-15 03:15:29.229130] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:20:58.119 [2024-05-15 03:15:29.229187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4164642 ] 00:20:58.119 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:58.119 Zero copy mechanism will not be used. 
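
The run above starts bdevperf in server mode on /var/tmp/spdk-raid.sock and then blocks in waitforlisten until the RPC socket answers. Below is a minimal sketch of that launch-and-wait pattern, assuming the same checkout layout; the polling loop and its retry bound are illustrative stand-ins for the autotest_common.sh helper, not its actual body:

    #!/usr/bin/env bash
    # Launch bdevperf with the flags logged above, then poll the
    # UNIX-domain RPC socket until the app is up (sketch of waitforlisten).
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk   # assumed checkout path
    SOCK=/var/tmp/spdk-raid.sock
    "$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    for _ in $(seq 1 100); do                              # retry bound is illustrative
        # rpc_get_methods only succeeds once the app is listening
        "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    kill -0 "$raid_pid"                                    # fail fast if startup died

The -z flag keeps bdevperf idle until a perform_tests RPC arrives over the same socket (issued later in this log via bdevperf.py), which is what lets the test wire up the raid bdev first and start the background I/O afterwards.
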
00:20:58.377 [2024-05-15 03:15:29.327653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.377 [2024-05-15 03:15:29.419791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.377 [2024-05-15 03:15:29.485352] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.377 [2024-05-15 03:15:29.485386] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.310 03:15:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:59.310 03:15:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # return 0 00:20:59.310 03:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:20:59.310 03:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:59.310 BaseBdev1_malloc 00:20:59.310 03:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:59.583 [2024-05-15 03:15:30.594728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:59.583 [2024-05-15 03:15:30.594775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.583 [2024-05-15 03:15:30.594795] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17d9b00 00:20:59.583 [2024-05-15 03:15:30.594805] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.583 [2024-05-15 03:15:30.596559] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.583 [2024-05-15 03:15:30.596597] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:59.583 BaseBdev1 00:20:59.583 03:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:20:59.583 03:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:59.840 BaseBdev2_malloc 00:20:59.840 03:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:00.098 [2024-05-15 03:15:31.100737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:00.098 [2024-05-15 03:15:31.100778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.098 [2024-05-15 03:15:31.100795] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x197f860 00:21:00.098 [2024-05-15 03:15:31.100805] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.098 [2024-05-15 03:15:31.102266] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.098 [2024-05-15 03:15:31.102292] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:00.098 BaseBdev2 00:21:00.098 03:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b spare_malloc 00:21:00.355 spare_malloc 00:21:00.355 03:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:00.614 spare_delay 00:21:00.614 03:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@614 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:00.871 [2024-05-15 03:15:31.871121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:00.871 [2024-05-15 03:15:31.871155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.871 [2024-05-15 03:15:31.871172] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x197ff50 00:21:00.871 [2024-05-15 03:15:31.871182] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.871 [2024-05-15 03:15:31.872641] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.871 [2024-05-15 03:15:31.872666] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:00.871 spare 00:21:00.871 03:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:01.130 [2024-05-15 03:15:32.127834] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:01.130 [2024-05-15 03:15:32.129112] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:01.130 [2024-05-15 03:15:32.129272] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x17d2460 00:21:01.130 [2024-05-15 03:15:32.129284] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:01.130 [2024-05-15 03:15:32.129470] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x197e8e0 00:21:01.130 [2024-05-15 03:15:32.129616] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17d2460 00:21:01.130 [2024-05-15 03:15:32.129624] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x17d2460 00:21:01.130 [2024-05-15 03:15:32.129719] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.130 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:01.130 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:01.130 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:01.130 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:01.130 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:01.130 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:01.130 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:01.130 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:01.130 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:01.130 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:01.130 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.130 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.388 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:01.388 "name": "raid_bdev1", 00:21:01.388 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:01.388 "strip_size_kb": 0, 00:21:01.388 "state": "online", 00:21:01.388 "raid_level": "raid1", 00:21:01.388 "superblock": true, 00:21:01.388 "num_base_bdevs": 2, 00:21:01.388 "num_base_bdevs_discovered": 2, 00:21:01.388 "num_base_bdevs_operational": 2, 00:21:01.388 "base_bdevs_list": [ 00:21:01.388 { 00:21:01.388 "name": "BaseBdev1", 00:21:01.388 "uuid": "1f01546a-8c6a-537e-bb39-d60249878ad4", 00:21:01.388 "is_configured": true, 00:21:01.388 "data_offset": 2048, 00:21:01.388 "data_size": 63488 00:21:01.388 }, 00:21:01.388 { 00:21:01.388 "name": "BaseBdev2", 00:21:01.388 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:01.388 "is_configured": true, 00:21:01.388 "data_offset": 2048, 00:21:01.388 "data_size": 63488 00:21:01.388 } 00:21:01.388 ] 00:21:01.388 }' 00:21:01.388 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:01.388 03:15:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.954 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:01.954 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:21:02.211 [2024-05-15 03:15:33.267096] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:02.211 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=63488 00:21:02.211 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.211 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:02.470 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # data_offset=2048 00:21:02.470 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@626 -- # '[' true = true ']' 00:21:02.470 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:02.470 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:02.728 [2024-05-15 03:15:33.661960] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x197e6d0 00:21:02.728 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:02.728 Zero copy mechanism will not be used. 00:21:02.728 Running I/O for 60 seconds... 
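
By this point the test has built each leg as malloc -> passthru, created a superblocked raid1 over the two passthru bdevs, read back the state, size and data offset, and bdevperf has just started its 60-second background workload. A condensed sketch of that setup recipe, using only the rpc.py calls visible in the log; the final jq assertion is an illustrative stand-in for the verify_raid_bdev_state helper:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk   # assumed checkout path
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for b in BaseBdev1 BaseBdev2; do
        $RPC bdev_malloc_create 32 512 -b "${b}_malloc"    # 32 MiB at 512 B blocks = 65536 blocks
        $RPC bdev_passthru_create -b "${b}_malloc" -p "$b"
    done
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    # -s reserves a superblock region on each leg: data_offset reads back as
    # 2048 blocks and num_blocks as 63488 (65536 - 2048), matching the log.
    state=$($RPC bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1") | .state')
    [ "$state" = online ]
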
00:21:02.728 [2024-05-15 03:15:33.778311] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:02.728 [2024-05-15 03:15:33.787207] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x197e6d0 00:21:02.728 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:02.728 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:02.728 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:02.728 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:02.728 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:02.728 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:02.728 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:02.728 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:02.728 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:02.728 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:02.728 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.728 03:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.986 03:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:02.986 "name": "raid_bdev1", 00:21:02.986 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:02.986 "strip_size_kb": 0, 00:21:02.986 "state": "online", 00:21:02.986 "raid_level": "raid1", 00:21:02.986 "superblock": true, 00:21:02.986 "num_base_bdevs": 2, 00:21:02.986 "num_base_bdevs_discovered": 1, 00:21:02.986 "num_base_bdevs_operational": 1, 00:21:02.986 "base_bdevs_list": [ 00:21:02.986 { 00:21:02.986 "name": null, 00:21:02.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.986 "is_configured": false, 00:21:02.986 "data_offset": 2048, 00:21:02.986 "data_size": 63488 00:21:02.986 }, 00:21:02.986 { 00:21:02.986 "name": "BaseBdev2", 00:21:02.986 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:02.986 "is_configured": true, 00:21:02.986 "data_offset": 2048, 00:21:02.986 "data_size": 63488 00:21:02.986 } 00:21:02.986 ] 00:21:02.986 }' 00:21:02.986 03:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:02.986 03:15:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.919 03:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:03.919 [2024-05-15 03:15:34.980957] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:03.919 03:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # sleep 1 00:21:03.919 [2024-05-15 03:15:35.034452] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17d1a00 00:21:03.919 [2024-05-15 03:15:35.036644] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev 
raid_bdev1 00:21:04.176 [2024-05-15 03:15:35.155727] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:04.176 [2024-05-15 03:15:35.156124] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:04.434 [2024-05-15 03:15:35.379632] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:04.434 [2024-05-15 03:15:35.379779] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:04.691 [2024-05-15 03:15:35.692685] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:04.691 [2024-05-15 03:15:35.692948] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:04.691 [2024-05-15 03:15:35.814105] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:04.949 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.949 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:04.949 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:04.949 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:04.949 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:04.949 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.949 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.949 [2024-05-15 03:15:36.091660] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:04.949 [2024-05-15 03:15:36.091995] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:05.207 [2024-05-15 03:15:36.194990] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:05.207 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:05.207 "name": "raid_bdev1", 00:21:05.207 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:05.207 "strip_size_kb": 0, 00:21:05.207 "state": "online", 00:21:05.207 "raid_level": "raid1", 00:21:05.207 "superblock": true, 00:21:05.207 "num_base_bdevs": 2, 00:21:05.207 "num_base_bdevs_discovered": 2, 00:21:05.207 "num_base_bdevs_operational": 2, 00:21:05.207 "process": { 00:21:05.207 "type": "rebuild", 00:21:05.207 "target": "spare", 00:21:05.207 "progress": { 00:21:05.207 "blocks": 16384, 00:21:05.207 "percent": 25 00:21:05.207 } 00:21:05.207 }, 00:21:05.207 "base_bdevs_list": [ 00:21:05.207 { 00:21:05.207 "name": "spare", 00:21:05.207 "uuid": "03871c2b-f878-5d79-92d8-d67df7f9f3b4", 00:21:05.207 "is_configured": true, 00:21:05.207 "data_offset": 2048, 00:21:05.207 "data_size": 63488 00:21:05.207 }, 00:21:05.207 { 00:21:05.207 "name": "BaseBdev2", 00:21:05.207 "uuid": 
"0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:05.207 "is_configured": true, 00:21:05.207 "data_offset": 2048, 00:21:05.207 "data_size": 63488 00:21:05.207 } 00:21:05.207 ] 00:21:05.207 }' 00:21:05.207 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:05.207 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:05.207 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:05.464 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:05.464 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:05.464 [2024-05-15 03:15:36.421383] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:05.722 [2024-05-15 03:15:36.624598] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:05.722 [2024-05-15 03:15:36.752081] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:05.722 [2024-05-15 03:15:36.771396] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.722 [2024-05-15 03:15:36.811962] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x197e6d0 00:21:05.722 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:05.722 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:05.722 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:05.722 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:05.722 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:05.722 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:05.722 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.722 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.722 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.722 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.722 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.722 03:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.980 03:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:05.980 "name": "raid_bdev1", 00:21:05.980 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:05.980 "strip_size_kb": 0, 00:21:05.980 "state": "online", 00:21:05.980 "raid_level": "raid1", 00:21:05.980 "superblock": true, 00:21:05.980 "num_base_bdevs": 2, 00:21:05.980 "num_base_bdevs_discovered": 1, 00:21:05.980 "num_base_bdevs_operational": 1, 00:21:05.980 "base_bdevs_list": [ 00:21:05.980 { 00:21:05.980 "name": null, 00:21:05.980 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:05.980 "is_configured": false, 00:21:05.980 "data_offset": 2048, 00:21:05.980 "data_size": 63488 00:21:05.980 }, 00:21:05.980 { 00:21:05.980 "name": "BaseBdev2", 00:21:05.980 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:05.980 "is_configured": true, 00:21:05.980 "data_offset": 2048, 00:21:05.980 "data_size": 63488 00:21:05.980 } 00:21:05.980 ] 00:21:05.980 }' 00:21:05.980 03:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:05.980 03:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:06.912 03:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:06.912 03:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:06.912 03:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:06.912 03:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:06.912 03:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:06.913 03:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.913 03:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.913 03:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:06.913 "name": "raid_bdev1", 00:21:06.913 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:06.913 "strip_size_kb": 0, 00:21:06.913 "state": "online", 00:21:06.913 "raid_level": "raid1", 00:21:06.913 "superblock": true, 00:21:06.913 "num_base_bdevs": 2, 00:21:06.913 "num_base_bdevs_discovered": 1, 00:21:06.913 "num_base_bdevs_operational": 1, 00:21:06.913 "base_bdevs_list": [ 00:21:06.913 { 00:21:06.913 "name": null, 00:21:06.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.913 "is_configured": false, 00:21:06.913 "data_offset": 2048, 00:21:06.913 "data_size": 63488 00:21:06.913 }, 00:21:06.913 { 00:21:06.913 "name": "BaseBdev2", 00:21:06.913 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:06.913 "is_configured": true, 00:21:06.913 "data_offset": 2048, 00:21:06.913 "data_size": 63488 00:21:06.913 } 00:21:06.913 ] 00:21:06.913 }' 00:21:06.913 03:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:06.913 03:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:07.170 03:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:07.170 03:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:07.170 03:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@667 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:07.428 [2024-05-15 03:15:38.356162] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:07.428 [2024-05-15 03:15:38.400480] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1870d60 00:21:07.428 03:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # sleep 1 00:21:07.428 [2024-05-15 03:15:38.402014] 
bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:07.428 [2024-05-15 03:15:38.503268] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:07.428 [2024-05-15 03:15:38.503492] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:07.686 [2024-05-15 03:15:38.624023] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:07.686 [2024-05-15 03:15:38.624173] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:07.943 [2024-05-15 03:15:38.895749] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:07.943 [2024-05-15 03:15:38.896048] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:07.943 [2024-05-15 03:15:39.017006] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:08.200 [2024-05-15 03:15:39.248521] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:08.468 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.468 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:08.468 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:08.468 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:08.468 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:08.468 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.468 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.468 [2024-05-15 03:15:39.478049] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:08.762 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:08.762 "name": "raid_bdev1", 00:21:08.762 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:08.762 "strip_size_kb": 0, 00:21:08.762 "state": "online", 00:21:08.762 "raid_level": "raid1", 00:21:08.762 "superblock": true, 00:21:08.762 "num_base_bdevs": 2, 00:21:08.762 "num_base_bdevs_discovered": 2, 00:21:08.762 "num_base_bdevs_operational": 2, 00:21:08.762 "process": { 00:21:08.762 "type": "rebuild", 00:21:08.762 "target": "spare", 00:21:08.762 "progress": { 00:21:08.762 "blocks": 16384, 00:21:08.762 "percent": 25 00:21:08.762 } 00:21:08.762 }, 00:21:08.762 "base_bdevs_list": [ 00:21:08.762 { 00:21:08.762 "name": "spare", 00:21:08.762 "uuid": "03871c2b-f878-5d79-92d8-d67df7f9f3b4", 00:21:08.762 "is_configured": true, 00:21:08.762 "data_offset": 2048, 00:21:08.762 "data_size": 63488 00:21:08.762 }, 00:21:08.762 { 00:21:08.762 "name": "BaseBdev2", 00:21:08.762 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:08.762 "is_configured": true, 00:21:08.762 "data_offset": 
2048, 00:21:08.762 "data_size": 63488 00:21:08.762 } 00:21:08.762 ] 00:21:08.763 }' 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # '[' true = true ']' 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # '[' = false ']' 00:21:08.763 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 671: [: =: unary operator expected 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=2 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # '[' 2 -gt 2 ']' 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # local timeout=716 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.763 03:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.763 [2024-05-15 03:15:39.819378] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:08.763 [2024-05-15 03:15:39.819687] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:09.035 03:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:09.035 "name": "raid_bdev1", 00:21:09.035 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:09.035 "strip_size_kb": 0, 00:21:09.035 "state": "online", 00:21:09.035 "raid_level": "raid1", 00:21:09.035 "superblock": true, 00:21:09.035 "num_base_bdevs": 2, 00:21:09.035 "num_base_bdevs_discovered": 2, 00:21:09.035 "num_base_bdevs_operational": 2, 00:21:09.035 "process": { 00:21:09.035 "type": "rebuild", 00:21:09.035 "target": "spare", 00:21:09.035 "progress": { 00:21:09.035 "blocks": 20480, 00:21:09.035 "percent": 32 00:21:09.035 } 00:21:09.035 }, 00:21:09.035 "base_bdevs_list": [ 00:21:09.035 { 00:21:09.035 "name": "spare", 00:21:09.035 "uuid": "03871c2b-f878-5d79-92d8-d67df7f9f3b4", 00:21:09.035 "is_configured": true, 00:21:09.035 "data_offset": 2048, 00:21:09.035 
"data_size": 63488 00:21:09.035 }, 00:21:09.035 { 00:21:09.035 "name": "BaseBdev2", 00:21:09.035 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:09.035 "is_configured": true, 00:21:09.035 "data_offset": 2048, 00:21:09.035 "data_size": 63488 00:21:09.035 } 00:21:09.035 ] 00:21:09.035 }' 00:21:09.035 03:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:09.035 [2024-05-15 03:15:40.021384] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:09.035 03:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:09.035 03:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:09.035 03:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:09.035 03:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # sleep 1 00:21:09.293 [2024-05-15 03:15:40.279770] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:09.550 [2024-05-15 03:15:40.497254] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:09.808 [2024-05-15 03:15:40.910767] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:10.065 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:21:10.065 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.065 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:10.065 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:10.065 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:10.065 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:10.065 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.065 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.322 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:10.322 "name": "raid_bdev1", 00:21:10.322 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:10.322 "strip_size_kb": 0, 00:21:10.322 "state": "online", 00:21:10.322 "raid_level": "raid1", 00:21:10.323 "superblock": true, 00:21:10.323 "num_base_bdevs": 2, 00:21:10.323 "num_base_bdevs_discovered": 2, 00:21:10.323 "num_base_bdevs_operational": 2, 00:21:10.323 "process": { 00:21:10.323 "type": "rebuild", 00:21:10.323 "target": "spare", 00:21:10.323 "progress": { 00:21:10.323 "blocks": 43008, 00:21:10.323 "percent": 67 00:21:10.323 } 00:21:10.323 }, 00:21:10.323 "base_bdevs_list": [ 00:21:10.323 { 00:21:10.323 "name": "spare", 00:21:10.323 "uuid": "03871c2b-f878-5d79-92d8-d67df7f9f3b4", 00:21:10.323 "is_configured": true, 00:21:10.323 "data_offset": 2048, 00:21:10.323 "data_size": 63488 00:21:10.323 }, 00:21:10.323 { 00:21:10.323 "name": "BaseBdev2", 00:21:10.323 "uuid": 
"0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:10.323 "is_configured": true, 00:21:10.323 "data_offset": 2048, 00:21:10.323 "data_size": 63488 00:21:10.323 } 00:21:10.323 ] 00:21:10.323 }' 00:21:10.323 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:10.323 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:10.323 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:10.323 [2024-05-15 03:15:41.453426] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:21:10.323 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:10.323 03:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # sleep 1 00:21:10.887 [2024-05-15 03:15:41.885384] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:11.452 [2024-05-15 03:15:42.336081] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:11.452 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:21:11.452 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.452 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:11.452 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:11.452 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:11.452 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:11.452 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.452 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.709 [2024-05-15 03:15:42.667303] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:11.709 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:11.709 "name": "raid_bdev1", 00:21:11.709 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:11.709 "strip_size_kb": 0, 00:21:11.709 "state": "online", 00:21:11.709 "raid_level": "raid1", 00:21:11.709 "superblock": true, 00:21:11.709 "num_base_bdevs": 2, 00:21:11.709 "num_base_bdevs_discovered": 2, 00:21:11.709 "num_base_bdevs_operational": 2, 00:21:11.709 "process": { 00:21:11.709 "type": "rebuild", 00:21:11.709 "target": "spare", 00:21:11.709 "progress": { 00:21:11.709 "blocks": 63488, 00:21:11.709 "percent": 100 00:21:11.709 } 00:21:11.709 }, 00:21:11.709 "base_bdevs_list": [ 00:21:11.709 { 00:21:11.709 "name": "spare", 00:21:11.709 "uuid": "03871c2b-f878-5d79-92d8-d67df7f9f3b4", 00:21:11.709 "is_configured": true, 00:21:11.709 "data_offset": 2048, 00:21:11.709 "data_size": 63488 00:21:11.709 }, 00:21:11.709 { 00:21:11.709 "name": "BaseBdev2", 00:21:11.709 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:11.709 "is_configured": true, 00:21:11.709 "data_offset": 2048, 00:21:11.709 "data_size": 63488 00:21:11.709 } 00:21:11.709 
] 00:21:11.709 }' 00:21:11.709 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:11.709 [2024-05-15 03:15:42.776307] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:11.709 [2024-05-15 03:15:42.778353] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.709 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:11.709 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:11.709 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.709 03:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # sleep 1 00:21:13.082 03:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:21:13.082 03:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.082 03:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:13.082 03:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:13.082 03:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:13.082 03:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:13.082 03:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.082 03:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.082 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:13.082 "name": "raid_bdev1", 00:21:13.082 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:13.082 "strip_size_kb": 0, 00:21:13.082 "state": "online", 00:21:13.082 "raid_level": "raid1", 00:21:13.082 "superblock": true, 00:21:13.082 "num_base_bdevs": 2, 00:21:13.082 "num_base_bdevs_discovered": 2, 00:21:13.082 "num_base_bdevs_operational": 2, 00:21:13.082 "base_bdevs_list": [ 00:21:13.082 { 00:21:13.082 "name": "spare", 00:21:13.082 "uuid": "03871c2b-f878-5d79-92d8-d67df7f9f3b4", 00:21:13.082 "is_configured": true, 00:21:13.082 "data_offset": 2048, 00:21:13.082 "data_size": 63488 00:21:13.082 }, 00:21:13.082 { 00:21:13.082 "name": "BaseBdev2", 00:21:13.082 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:13.082 "is_configured": true, 00:21:13.082 "data_offset": 2048, 00:21:13.082 "data_size": 63488 00:21:13.082 } 00:21:13.082 ] 00:21:13.082 }' 00:21:13.082 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:13.082 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:13.082 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:13.082 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:13.082 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # break 00:21:13.082 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:13.082 03:15:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:13.082 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:13.082 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:13.082 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:13.082 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.082 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.340 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:13.340 "name": "raid_bdev1", 00:21:13.340 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:13.340 "strip_size_kb": 0, 00:21:13.340 "state": "online", 00:21:13.340 "raid_level": "raid1", 00:21:13.340 "superblock": true, 00:21:13.340 "num_base_bdevs": 2, 00:21:13.340 "num_base_bdevs_discovered": 2, 00:21:13.340 "num_base_bdevs_operational": 2, 00:21:13.340 "base_bdevs_list": [ 00:21:13.340 { 00:21:13.340 "name": "spare", 00:21:13.340 "uuid": "03871c2b-f878-5d79-92d8-d67df7f9f3b4", 00:21:13.340 "is_configured": true, 00:21:13.340 "data_offset": 2048, 00:21:13.341 "data_size": 63488 00:21:13.341 }, 00:21:13.341 { 00:21:13.341 "name": "BaseBdev2", 00:21:13.341 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:13.341 "is_configured": true, 00:21:13.341 "data_offset": 2048, 00:21:13.341 "data_size": 63488 00:21:13.341 } 00:21:13.341 ] 00:21:13.341 }' 00:21:13.341 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:13.599 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.599 03:15:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:13.857 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:13.857 "name": "raid_bdev1",
00:21:13.857 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62",
00:21:13.857 "strip_size_kb": 0,
00:21:13.857 "state": "online",
00:21:13.857 "raid_level": "raid1",
00:21:13.857 "superblock": true,
00:21:13.857 "num_base_bdevs": 2,
00:21:13.857 "num_base_bdevs_discovered": 2,
00:21:13.857 "num_base_bdevs_operational": 2,
00:21:13.857 "base_bdevs_list": [
00:21:13.857 {
00:21:13.857 "name": "spare",
00:21:13.857 "uuid": "03871c2b-f878-5d79-92d8-d67df7f9f3b4",
00:21:13.857 "is_configured": true,
00:21:13.857 "data_offset": 2048,
00:21:13.857 "data_size": 63488
00:21:13.857 },
00:21:13.857 {
00:21:13.857 "name": "BaseBdev2",
00:21:13.857 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab",
00:21:13.857 "is_configured": true,
00:21:13.857 "data_offset": 2048,
00:21:13.857 "data_size": 63488
00:21:13.857 }
00:21:13.857 ]
00:21:13.857 }'
00:21:13.857 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:13.857 03:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:21:14.423 03:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:21:14.681 [2024-05-15 03:15:45.672935] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:14.681 [2024-05-15 03:15:45.672966] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:14.681
00:21:14.681 Latency(us)
00:21:14.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:14.681 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:21:14.681 raid_bdev1 : 12.07 94.05 282.16 0.00 0.00 14742.70 300.37 111348.78
===================================================================================================================
00:21:14.681 Total : 94.05 282.16 0.00 0.00 14742.70 300.37 111348.78
00:21:14.681 [2024-05-15 03:15:45.765356] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:14.681 [2024-05-15 03:15:45.765383] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:14.681 [2024-05-15 03:15:45.765460] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:14.681 [2024-05-15 03:15:45.765469] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17d2460 name raid_bdev1, state offline
00:21:14.681 0
00:21:14.681 03:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:14.681 03:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # jq length
00:21:14.940 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]]
00:21:14.940 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' true = true ']'
00:21:14.940 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # '[' true = true ']'
00:21:14.940 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:21:14.940
03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:14.940 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:14.940 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:14.940 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:14.940 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:14.940 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:14.940 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:14.940 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:14.940 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:15.198 /dev/nbd0 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.198 1+0 records in 00:21:15.198 1+0 records out 00:21:15.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240057 s, 17.1 MB/s 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # for bdev in "${base_bdevs[@]:1}" 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # '[' -z BaseBdev2 ']' 
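
The nbd block just completed for spare on /dev/nbd0, and about to repeat for BaseBdev2 on /dev/nbd1, is the readback pattern the test applies to every exported bdev: start the nbd disk over RPC, poll /proc/partitions until the kernel publishes the device, then issue one direct-I/O read to prove it is usable. A sketch of that pattern, with the temp file path shortened and the sleep interval assumed (the helper's value is not shown in the log):

    #!/usr/bin/env bash
    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC nbd_start_disk spare /dev/nbd0           # export the bdev via the kernel nbd driver
    for _ in $(seq 1 20); do                      # same 20-try bound as waitfornbd above
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1                                 # poll interval assumed
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

The cmp -i 1048576 that follows compares the two exported legs only past their first 1 MiB: with 512 B blocks, the data_offset of 2048 blocks reported earlier works out to exactly 1048576 bytes of superblock to skip on each device, so only the mirrored data region is checked.
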
00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.198 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:15.456 /dev/nbd1 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.456 1+0 records in 00:21:15.456 1+0 records out 00:21:15.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228536 s, 17.9 MB/s 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.456 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@736 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:15.714 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:15.714 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.714 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:15.714 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:15.714 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:15.714 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:15.714 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:15.972 03:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:16.230 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:16.230 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:16.230 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:16.230 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.230 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.230 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:16.230 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:16.230 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.230 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # '[' 
true = true ']' 00:21:16.230 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:21:16.230 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev1 ']' 00:21:16.230 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:16.489 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:16.748 [2024-05-15 03:15:47.718527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:16.748 [2024-05-15 03:15:47.718573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.748 [2024-05-15 03:15:47.718591] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18704c0 00:21:16.748 [2024-05-15 03:15:47.718602] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.748 [2024-05-15 03:15:47.720305] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.748 [2024-05-15 03:15:47.720333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:16.748 [2024-05-15 03:15:47.720403] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:16.748 [2024-05-15 03:15:47.720428] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:16.748 BaseBdev1 00:21:16.748 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:21:16.748 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev2 ']' 00:21:16.748 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:17.006 03:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:17.264 [2024-05-15 03:15:48.219993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:17.264 [2024-05-15 03:15:48.220033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.264 [2024-05-15 03:15:48.220050] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x197fa90 00:21:17.264 [2024-05-15 03:15:48.220059] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.264 [2024-05-15 03:15:48.220403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.264 [2024-05-15 03:15:48.220419] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:17.264 [2024-05-15 03:15:48.220479] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:17.264 [2024-05-15 03:15:48.220489] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:17.264 [2024-05-15 03:15:48.220502] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:17.264 
[2024-05-15 03:15:48.220515] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x197ebb0 name raid_bdev1, state configuring 00:21:17.264 [2024-05-15 03:15:48.220542] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:17.264 BaseBdev2 00:21:17.264 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:17.522 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:17.781 [2024-05-15 03:15:48.725408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:17.781 [2024-05-15 03:15:48.725446] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.781 [2024-05-15 03:15:48.725466] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17d9ed0 00:21:17.781 [2024-05-15 03:15:48.725476] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.781 [2024-05-15 03:15:48.725860] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.781 [2024-05-15 03:15:48.725877] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:17.781 [2024-05-15 03:15:48.725955] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:17.781 [2024-05-15 03:15:48.725972] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:17.781 spare 00:21:17.781 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:17.781 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:17.781 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:17.781 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:17.781 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:17.781 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:17.781 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:17.781 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:17.781 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:17.781 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:17.781 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.781 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.781 [2024-05-15 03:15:48.826300] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x17d3e40 00:21:17.781 [2024-05-15 03:15:48.826316] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:17.781 [2024-05-15 03:15:48.826521] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17d0740 00:21:17.781 
[2024-05-15 03:15:48.826677] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17d3e40 00:21:17.781 [2024-05-15 03:15:48.826686] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x17d3e40 00:21:17.781 [2024-05-15 03:15:48.826800] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.039 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:18.039 "name": "raid_bdev1", 00:21:18.039 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:18.039 "strip_size_kb": 0, 00:21:18.039 "state": "online", 00:21:18.039 "raid_level": "raid1", 00:21:18.039 "superblock": true, 00:21:18.039 "num_base_bdevs": 2, 00:21:18.039 "num_base_bdevs_discovered": 2, 00:21:18.039 "num_base_bdevs_operational": 2, 00:21:18.039 "base_bdevs_list": [ 00:21:18.039 { 00:21:18.039 "name": "spare", 00:21:18.039 "uuid": "03871c2b-f878-5d79-92d8-d67df7f9f3b4", 00:21:18.039 "is_configured": true, 00:21:18.039 "data_offset": 2048, 00:21:18.039 "data_size": 63488 00:21:18.039 }, 00:21:18.039 { 00:21:18.039 "name": "BaseBdev2", 00:21:18.039 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:18.039 "is_configured": true, 00:21:18.039 "data_offset": 2048, 00:21:18.039 "data_size": 63488 00:21:18.039 } 00:21:18.039 ] 00:21:18.039 }' 00:21:18.039 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.039 03:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:18.604 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:18.604 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:18.604 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:18.604 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:18.604 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:18.604 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.604 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.862 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:18.862 "name": "raid_bdev1", 00:21:18.862 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:18.862 "strip_size_kb": 0, 00:21:18.862 "state": "online", 00:21:18.862 "raid_level": "raid1", 00:21:18.862 "superblock": true, 00:21:18.862 "num_base_bdevs": 2, 00:21:18.862 "num_base_bdevs_discovered": 2, 00:21:18.862 "num_base_bdevs_operational": 2, 00:21:18.862 "base_bdevs_list": [ 00:21:18.862 { 00:21:18.862 "name": "spare", 00:21:18.862 "uuid": "03871c2b-f878-5d79-92d8-d67df7f9f3b4", 00:21:18.862 "is_configured": true, 00:21:18.862 "data_offset": 2048, 00:21:18.862 "data_size": 63488 00:21:18.862 }, 00:21:18.862 { 00:21:18.862 "name": "BaseBdev2", 00:21:18.862 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:18.862 "is_configured": true, 00:21:18.862 "data_offset": 2048, 00:21:18.862 "data_size": 63488 00:21:18.862 } 00:21:18.862 ] 00:21:18.862 }' 00:21:18.862 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 
00:21:18.862 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:18.862 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:18.862 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:18.862 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.862 03:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:19.120 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.120 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:19.379 [2024-05-15 03:15:50.374184] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:19.379 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:19.379 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:19.379 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:19.379 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:19.379 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:19.379 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:19.379 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:19.379 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:19.379 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:19.379 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:19.379 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.379 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.637 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:19.637 "name": "raid_bdev1", 00:21:19.637 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:19.637 "strip_size_kb": 0, 00:21:19.637 "state": "online", 00:21:19.637 "raid_level": "raid1", 00:21:19.637 "superblock": true, 00:21:19.637 "num_base_bdevs": 2, 00:21:19.637 "num_base_bdevs_discovered": 1, 00:21:19.637 "num_base_bdevs_operational": 1, 00:21:19.637 "base_bdevs_list": [ 00:21:19.637 { 00:21:19.637 "name": null, 00:21:19.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.637 "is_configured": false, 00:21:19.637 "data_offset": 2048, 00:21:19.637 "data_size": 63488 00:21:19.637 }, 00:21:19.637 { 00:21:19.637 "name": "BaseBdev2", 00:21:19.637 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:19.637 "is_configured": true, 00:21:19.637 "data_offset": 2048, 00:21:19.637 "data_size": 63488 00:21:19.637 } 00:21:19.637 ] 00:21:19.637 }' 00:21:19.637 03:15:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:19.637 03:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:20.203 03:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:20.461 [2024-05-15 03:15:51.417168] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.461 [2024-05-15 03:15:51.417324] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:20.461 [2024-05-15 03:15:51.417338] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:20.461 [2024-05-15 03:15:51.417363] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.461 [2024-05-15 03:15:51.422466] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1870750 00:21:20.462 [2024-05-15 03:15:51.424571] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:20.462 03:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # sleep 1 00:21:21.397 03:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.397 03:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:21.397 03:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:21.397 03:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:21.397 03:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:21.397 03:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.397 03:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.656 03:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:21.656 "name": "raid_bdev1", 00:21:21.656 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:21.656 "strip_size_kb": 0, 00:21:21.656 "state": "online", 00:21:21.656 "raid_level": "raid1", 00:21:21.656 "superblock": true, 00:21:21.656 "num_base_bdevs": 2, 00:21:21.656 "num_base_bdevs_discovered": 2, 00:21:21.656 "num_base_bdevs_operational": 2, 00:21:21.656 "process": { 00:21:21.656 "type": "rebuild", 00:21:21.656 "target": "spare", 00:21:21.656 "progress": { 00:21:21.656 "blocks": 22528, 00:21:21.656 "percent": 35 00:21:21.656 } 00:21:21.656 }, 00:21:21.656 "base_bdevs_list": [ 00:21:21.656 { 00:21:21.656 "name": "spare", 00:21:21.656 "uuid": "03871c2b-f878-5d79-92d8-d67df7f9f3b4", 00:21:21.656 "is_configured": true, 00:21:21.656 "data_offset": 2048, 00:21:21.656 "data_size": 63488 00:21:21.656 }, 00:21:21.656 { 00:21:21.656 "name": "BaseBdev2", 00:21:21.656 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:21.656 "is_configured": true, 00:21:21.656 "data_offset": 2048, 00:21:21.656 "data_size": 63488 00:21:21.656 } 00:21:21.656 ] 00:21:21.656 }' 00:21:21.656 03:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:21.656 03:15:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.656 03:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:21.656 03:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.656 03:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:21.915 [2024-05-15 03:15:52.949163] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.915 [2024-05-15 03:15:53.037372] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:21.915 [2024-05-15 03:15:53.037414] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.915 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.915 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:21.915 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:21.915 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:21.915 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:21.915 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:21.915 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:21.915 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:21.915 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:21.915 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:21.915 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.915 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.174 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:22.174 "name": "raid_bdev1", 00:21:22.174 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:22.174 "strip_size_kb": 0, 00:21:22.174 "state": "online", 00:21:22.174 "raid_level": "raid1", 00:21:22.174 "superblock": true, 00:21:22.174 "num_base_bdevs": 2, 00:21:22.174 "num_base_bdevs_discovered": 1, 00:21:22.174 "num_base_bdevs_operational": 1, 00:21:22.174 "base_bdevs_list": [ 00:21:22.174 { 00:21:22.174 "name": null, 00:21:22.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.174 "is_configured": false, 00:21:22.174 "data_offset": 2048, 00:21:22.174 "data_size": 63488 00:21:22.174 }, 00:21:22.174 { 00:21:22.174 "name": "BaseBdev2", 00:21:22.174 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:22.174 "is_configured": true, 00:21:22.174 "data_offset": 2048, 00:21:22.174 "data_size": 63488 00:21:22.174 } 00:21:22.174 ] 00:21:22.174 }' 00:21:22.174 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:22.174 03:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.845 
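The step above deletes the 'spare' passthru while its rebuild is still running; the suite then re-reads the array and expects raid1 to stay online in degraded form (1 of 2 base bdevs discovered and operational, a null placeholder in base_bdevs_list). A minimal sketch of that check, assuming the same RPC socket and workspace paths shown in this log (not the verbatim verify_raid_bdev_state implementation):

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state' <<< "$info") == online ]]                   # array must survive the removal
  (( $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 1 ))    # degraded: 1 of 2 members left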
03:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:23.104 [2024-05-15 03:15:54.173203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:23.104 [2024-05-15 03:15:54.173250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.104 [2024-05-15 03:15:54.173271] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17d4d20 00:21:23.104 [2024-05-15 03:15:54.173281] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.104 [2024-05-15 03:15:54.173671] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.104 [2024-05-15 03:15:54.173688] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:23.104 [2024-05-15 03:15:54.173766] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:23.104 [2024-05-15 03:15:54.173776] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:23.104 [2024-05-15 03:15:54.173790] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:23.104 [2024-05-15 03:15:54.173807] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:23.104 [2024-05-15 03:15:54.178904] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1870750 00:21:23.104 spare 00:21:23.104 [2024-05-15 03:15:54.180427] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:23.104 03:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:24.482 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.483 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.483 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:24.483 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:24.483 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.483 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.483 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.483 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.483 "name": "raid_bdev1", 00:21:24.483 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:24.483 "strip_size_kb": 0, 00:21:24.483 "state": "online", 00:21:24.483 "raid_level": "raid1", 00:21:24.483 "superblock": true, 00:21:24.483 "num_base_bdevs": 2, 00:21:24.483 "num_base_bdevs_discovered": 2, 00:21:24.483 "num_base_bdevs_operational": 2, 00:21:24.483 "process": { 00:21:24.483 "type": "rebuild", 00:21:24.483 "target": "spare", 00:21:24.483 "progress": { 00:21:24.483 "blocks": 24576, 00:21:24.483 "percent": 38 00:21:24.483 } 00:21:24.483 }, 00:21:24.483 "base_bdevs_list": [ 00:21:24.483 { 00:21:24.483 "name": "spare", 00:21:24.483 "uuid": 
"03871c2b-f878-5d79-92d8-d67df7f9f3b4", 00:21:24.483 "is_configured": true, 00:21:24.483 "data_offset": 2048, 00:21:24.483 "data_size": 63488 00:21:24.483 }, 00:21:24.483 { 00:21:24.483 "name": "BaseBdev2", 00:21:24.483 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:24.483 "is_configured": true, 00:21:24.483 "data_offset": 2048, 00:21:24.483 "data_size": 63488 00:21:24.483 } 00:21:24.483 ] 00:21:24.483 }' 00:21:24.483 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:24.483 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.483 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.483 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.483 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:24.740 [2024-05-15 03:15:55.793296] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:24.740 [2024-05-15 03:15:55.893603] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:24.740 [2024-05-15 03:15:55.893651] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.998 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:24.998 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:24.998 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:24.998 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:24.998 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:24.998 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:24.998 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:24.998 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:24.998 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:24.998 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:24.998 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.998 03:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.256 03:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:25.256 "name": "raid_bdev1", 00:21:25.256 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:25.256 "strip_size_kb": 0, 00:21:25.256 "state": "online", 00:21:25.256 "raid_level": "raid1", 00:21:25.256 "superblock": true, 00:21:25.256 "num_base_bdevs": 2, 00:21:25.256 "num_base_bdevs_discovered": 1, 00:21:25.256 "num_base_bdevs_operational": 1, 00:21:25.256 "base_bdevs_list": [ 00:21:25.256 { 00:21:25.256 "name": null, 00:21:25.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.256 "is_configured": false, 
00:21:25.256 "data_offset": 2048, 00:21:25.256 "data_size": 63488 00:21:25.256 }, 00:21:25.256 { 00:21:25.256 "name": "BaseBdev2", 00:21:25.256 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:25.256 "is_configured": true, 00:21:25.256 "data_offset": 2048, 00:21:25.256 "data_size": 63488 00:21:25.256 } 00:21:25.256 ] 00:21:25.256 }' 00:21:25.256 03:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:25.256 03:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:25.823 03:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:25.823 03:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:25.823 03:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:25.823 03:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:25.823 03:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:25.823 03:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.823 03:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.082 03:15:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.082 "name": "raid_bdev1", 00:21:26.082 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:26.082 "strip_size_kb": 0, 00:21:26.082 "state": "online", 00:21:26.082 "raid_level": "raid1", 00:21:26.082 "superblock": true, 00:21:26.082 "num_base_bdevs": 2, 00:21:26.082 "num_base_bdevs_discovered": 1, 00:21:26.082 "num_base_bdevs_operational": 1, 00:21:26.082 "base_bdevs_list": [ 00:21:26.082 { 00:21:26.082 "name": null, 00:21:26.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.082 "is_configured": false, 00:21:26.082 "data_offset": 2048, 00:21:26.082 "data_size": 63488 00:21:26.082 }, 00:21:26.082 { 00:21:26.082 "name": "BaseBdev2", 00:21:26.083 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:26.083 "is_configured": true, 00:21:26.083 "data_offset": 2048, 00:21:26.083 "data_size": 63488 00:21:26.083 } 00:21:26.083 ] 00:21:26.083 }' 00:21:26.083 03:15:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.083 03:15:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:26.083 03:15:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.083 03:15:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:26.083 03:15:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:26.341 03:15:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@785 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:26.600 [2024-05-15 03:15:57.623039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:26.600 [2024-05-15 03:15:57.623087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:21:26.600 [2024-05-15 03:15:57.623106] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17d5cd0 00:21:26.600 [2024-05-15 03:15:57.623121] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.600 [2024-05-15 03:15:57.623476] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.600 [2024-05-15 03:15:57.623492] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:26.600 [2024-05-15 03:15:57.623557] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:26.600 [2024-05-15 03:15:57.623567] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:26.600 [2024-05-15 03:15:57.623574] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:26.600 BaseBdev1 00:21:26.600 03:15:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # sleep 1 00:21:27.537 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:27.537 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:27.537 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:27.537 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:27.537 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:27.537 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:27.537 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:27.537 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:27.537 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:27.537 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:27.537 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.537 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.796 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:27.796 "name": "raid_bdev1", 00:21:27.796 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:27.796 "strip_size_kb": 0, 00:21:27.796 "state": "online", 00:21:27.796 "raid_level": "raid1", 00:21:27.796 "superblock": true, 00:21:27.796 "num_base_bdevs": 2, 00:21:27.796 "num_base_bdevs_discovered": 1, 00:21:27.796 "num_base_bdevs_operational": 1, 00:21:27.796 "base_bdevs_list": [ 00:21:27.796 { 00:21:27.796 "name": null, 00:21:27.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.796 "is_configured": false, 00:21:27.796 "data_offset": 2048, 00:21:27.796 "data_size": 63488 00:21:27.796 }, 00:21:27.796 { 00:21:27.796 "name": "BaseBdev2", 00:21:27.796 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:27.796 "is_configured": true, 00:21:27.796 "data_offset": 2048, 00:21:27.796 "data_size": 63488 00:21:27.796 } 00:21:27.796 ] 00:21:27.796 }' 00:21:27.796 03:15:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:27.796 03:15:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.363 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:28.363 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:28.363 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:28.363 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:28.363 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:28.622 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.622 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.622 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:28.622 "name": "raid_bdev1", 00:21:28.622 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:28.622 "strip_size_kb": 0, 00:21:28.622 "state": "online", 00:21:28.622 "raid_level": "raid1", 00:21:28.622 "superblock": true, 00:21:28.622 "num_base_bdevs": 2, 00:21:28.622 "num_base_bdevs_discovered": 1, 00:21:28.622 "num_base_bdevs_operational": 1, 00:21:28.622 "base_bdevs_list": [ 00:21:28.622 { 00:21:28.622 "name": null, 00:21:28.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.622 "is_configured": false, 00:21:28.622 "data_offset": 2048, 00:21:28.622 "data_size": 63488 00:21:28.622 }, 00:21:28.622 { 00:21:28.622 "name": "BaseBdev2", 00:21:28.622 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:28.622 "is_configured": true, 00:21:28.622 "data_offset": 2048, 00:21:28.622 "data_size": 63488 00:21:28.622 } 00:21:28.622 ] 00:21:28.622 }' 00:21:28.622 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:28.880 
03:15:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:28.880 03:15:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:29.139 [2024-05-15 03:16:00.102016] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:29.139 [2024-05-15 03:16:00.102145] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:29.139 [2024-05-15 03:16:00.102160] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:29.139 request: 00:21:29.139 { 00:21:29.139 "raid_bdev": "raid_bdev1", 00:21:29.139 "base_bdev": "BaseBdev1", 00:21:29.139 "method": "bdev_raid_add_base_bdev", 00:21:29.139 "req_id": 1 00:21:29.139 } 00:21:29.139 Got JSON-RPC error response 00:21:29.139 response: 00:21:29.139 { 00:21:29.139 "code": -22, 00:21:29.139 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:29.139 } 00:21:29.139 03:16:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:21:29.139 03:16:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:29.139 03:16:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:29.139 03:16:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:29.139 03:16:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # sleep 1 00:21:30.075 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:30.075 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:30.075 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:30.075 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:30.075 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:30.075 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:30.075 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:30.075 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:30.075 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:30.075 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:30.075 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.075 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.334 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:30.334 "name": "raid_bdev1", 00:21:30.334 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:30.334 "strip_size_kb": 0, 00:21:30.334 "state": "online", 00:21:30.334 "raid_level": "raid1", 00:21:30.334 "superblock": true, 00:21:30.334 "num_base_bdevs": 2, 00:21:30.334 "num_base_bdevs_discovered": 1, 00:21:30.334 "num_base_bdevs_operational": 1, 00:21:30.334 "base_bdevs_list": [ 00:21:30.334 { 00:21:30.334 "name": null, 00:21:30.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.334 "is_configured": false, 00:21:30.334 "data_offset": 2048, 00:21:30.334 "data_size": 63488 00:21:30.334 }, 00:21:30.334 { 00:21:30.334 "name": "BaseBdev2", 00:21:30.334 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:30.334 "is_configured": true, 00:21:30.334 "data_offset": 2048, 00:21:30.334 "data_size": 63488 00:21:30.334 } 00:21:30.334 ] 00:21:30.334 }' 00:21:30.334 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:30.334 03:16:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.903 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:30.903 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:30.903 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:30.903 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:30.903 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:30.903 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.903 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.162 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:31.162 "name": "raid_bdev1", 00:21:31.162 "uuid": "002b30e5-f25c-4c16-92e7-f00719ddab62", 00:21:31.162 "strip_size_kb": 0, 00:21:31.162 "state": "online", 00:21:31.162 "raid_level": "raid1", 00:21:31.162 "superblock": true, 00:21:31.162 "num_base_bdevs": 2, 00:21:31.162 "num_base_bdevs_discovered": 1, 00:21:31.162 "num_base_bdevs_operational": 1, 00:21:31.162 "base_bdevs_list": [ 00:21:31.162 { 00:21:31.162 "name": null, 00:21:31.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.162 "is_configured": false, 00:21:31.162 "data_offset": 2048, 00:21:31.162 "data_size": 63488 00:21:31.162 }, 00:21:31.162 { 00:21:31.162 "name": "BaseBdev2", 00:21:31.162 "uuid": "0582b04b-9b5b-5818-a090-d2b8772a13ab", 00:21:31.162 "is_configured": true, 00:21:31.162 "data_offset": 2048, 00:21:31.162 "data_size": 63488 00:21:31.162 } 00:21:31.162 ] 00:21:31.162 }' 00:21:31.162 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:31.421 03:16:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # killprocess 4164642
00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@946 -- # '[' -z 4164642 ']'
00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # kill -0 4164642
00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # uname
00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4164642
00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4164642'
00:21:31.421 killing process with pid 4164642
00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@965 -- # kill 4164642
00:21:31.421 Received shutdown signal, test time was about 28.685145 seconds
00:21:31.421
00:21:31.421 Latency(us)
00:21:31.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:31.421 ===================================================================================================================
00:21:31.421 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 [2024-05-15 03:16:02.419369] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:31.421 [2024-05-15 03:16:02.419472] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:31.421 [2024-05-15 03:16:02.419521] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:31.421 [2024-05-15 03:16:02.419531] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17d3e40 name raid_bdev1, state offline
00:21:31.421 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # wait 4164642
00:21:31.421 [2024-05-15 03:16:02.439641] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@797 -- # return 0
00:21:31.680
00:21:31.680 real 0m33.505s
00:21:31.680 user 0m53.792s
00:21:31.680 sys 0m3.735s
00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1122 -- # xtrace_disable
00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:21:31.680 ************************************
00:21:31.680 END TEST raid_rebuild_test_sb_io
00:21:31.680 ************************************
00:21:31.680 03:16:02 bdev_raid -- bdev/bdev_raid.sh@822 -- # for n in 2 4
00:21:31.680 03:16:02 bdev_raid -- bdev/bdev_raid.sh@823 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true
00:21:31.680 03:16:02 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:21:31.680 03:16:02 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:21:31.680 03:16:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:21:31.680
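run_test launches the next case with positional arguments that the @574-@578 local declarations traced below decode; a hedged mapping, reconstructed from this log rather than the suite source:

  raid_rebuild_test raid1 4 false false true
  #                 |     | |     |     '-- verify=true (check rebuilt data)
  #                 |     | |     '-------- background_io=false (no I/O load during rebuild)
  #                 |     | '-------------- superblock=false (plain rebuild, not the _sb variant)
  #                 |     '---------------- num_base_bdevs=4 (BaseBdev1..BaseBdev4)
  #                 '---------------------- raid_level=raid1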
************************************ 00:21:31.680 START TEST raid_rebuild_test 00:21:31.680 ************************************ 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 false false true 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=4 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local superblock=false 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local background_io=false 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local verify=true 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i = 1 )) 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # echo BaseBdev1 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # echo BaseBdev2 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # echo BaseBdev3 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # echo BaseBdev4 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # local strip_size 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@582 -- # local create_arg 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local data_offset 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # raid_pid=4170510 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # waitforlisten 4170510 /var/tmp/spdk-raid.sock 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 
-r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 4170510 ']' 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:31.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:31.680 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.680 [2024-05-15 03:16:02.813220] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:21:31.680 [2024-05-15 03:16:02.813274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170510 ] 00:21:31.680 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:31.680 Zero copy mechanism will not be used. 00:21:31.940 [2024-05-15 03:16:02.911626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.940 [2024-05-15 03:16:03.005520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.940 [2024-05-15 03:16:03.069386] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:31.940 [2024-05-15 03:16:03.069419] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.876 03:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:32.876 03:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:21:32.876 03:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:21:32.876 03:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:32.876 BaseBdev1_malloc 00:21:32.876 03:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:33.135 [2024-05-15 03:16:04.250708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:33.135 [2024-05-15 03:16:04.250751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.135 [2024-05-15 03:16:04.250769] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2356b00 00:21:33.135 [2024-05-15 03:16:04.250778] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.135 [2024-05-15 03:16:04.252490] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.135 [2024-05-15 03:16:04.252517] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:33.135 BaseBdev1 00:21:33.135 03:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:21:33.135 
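For orientation: the block above launches bdevperf in wait-for-RPC mode (-z) on a private socket, then builds each base bdev as a passthru stacked on a fresh malloc bdev (32 MiB, 512 B blocks, i.e. the 65536-block data_size reported later). The 3 MiB I/O size also explains the zero-copy notice, since 3145728 B exceeds the 65536 B threshold. A condensed sketch of the same sequence, with $rpc and $sock as shorthand for the rpc.py path and /var/tmp/spdk-raid.sock:

    # randrw 50/50, 3 MiB I/Os, queue depth 2, against raid_bdev1, started suspended (-z)
    build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc -s "$sock" bdev_malloc_create 32 512 -b "${bdev}_malloc"      # 32 MiB, 512 B blocks
        $rpc -s "$sock" bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
    done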
03:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:33.393 BaseBdev2_malloc 00:21:33.393 03:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:33.652 [2024-05-15 03:16:04.744697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:33.652 [2024-05-15 03:16:04.744738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.652 [2024-05-15 03:16:04.744753] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24fc860 00:21:33.652 [2024-05-15 03:16:04.744763] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.652 [2024-05-15 03:16:04.746294] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.652 [2024-05-15 03:16:04.746320] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:33.652 BaseBdev2 00:21:33.652 03:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:21:33.652 03:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:33.911 BaseBdev3_malloc 00:21:33.911 03:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:34.170 [2024-05-15 03:16:05.254633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:34.170 [2024-05-15 03:16:05.254676] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.170 [2024-05-15 03:16:05.254692] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24fe080 00:21:34.170 [2024-05-15 03:16:05.254701] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.170 [2024-05-15 03:16:05.256239] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.170 [2024-05-15 03:16:05.256264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:34.170 BaseBdev3 00:21:34.170 03:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:21:34.170 03:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:34.428 BaseBdev4_malloc 00:21:34.428 03:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:34.687 [2024-05-15 03:16:05.744353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:34.687 [2024-05-15 03:16:05.744394] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.687 [2024-05-15 03:16:05.744416] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24fcf20 00:21:34.687 [2024-05-15 03:16:05.744426] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.687 [2024-05-15 03:16:05.745931] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.687 [2024-05-15 03:16:05.745956] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:34.687 BaseBdev4 00:21:34.687 03:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:34.946 spare_malloc 00:21:34.946 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:35.205 spare_delay 00:21:35.205 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@614 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:35.464 [2024-05-15 03:16:06.498794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:35.464 [2024-05-15 03:16:06.498835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.464 [2024-05-15 03:16:06.498861] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x234fe10 00:21:35.464 [2024-05-15 03:16:06.498871] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.464 [2024-05-15 03:16:06.500468] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.464 [2024-05-15 03:16:06.500494] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:35.464 spare 00:21:35.464 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:35.723 [2024-05-15 03:16:06.751482] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:35.723 [2024-05-15 03:16:06.752842] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.723 [2024-05-15 03:16:06.752906] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:35.724 [2024-05-15 03:16:06.752954] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:35.724 [2024-05-15 03:16:06.753032] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x23506b0 00:21:35.724 [2024-05-15 03:16:06.753041] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:35.724 [2024-05-15 03:16:06.753259] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2354be0 00:21:35.724 [2024-05-15 03:16:06.753416] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23506b0 00:21:35.724 [2024-05-15 03:16:06.753425] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23506b0 00:21:35.724 [2024-05-15 03:16:06.753543] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.724 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:35.724 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=raid_bdev1 00:21:35.724 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:35.724 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:35.724 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:35.724 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:35.724 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:35.724 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:35.724 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:35.724 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:35.724 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.724 03:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.982 03:16:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:35.982 "name": "raid_bdev1", 00:21:35.982 "uuid": "8849c31b-349a-4e04-91ab-0d8642b68906", 00:21:35.982 "strip_size_kb": 0, 00:21:35.982 "state": "online", 00:21:35.982 "raid_level": "raid1", 00:21:35.982 "superblock": false, 00:21:35.982 "num_base_bdevs": 4, 00:21:35.982 "num_base_bdevs_discovered": 4, 00:21:35.982 "num_base_bdevs_operational": 4, 00:21:35.982 "base_bdevs_list": [ 00:21:35.982 { 00:21:35.982 "name": "BaseBdev1", 00:21:35.982 "uuid": "d4a64a95-f628-513a-8929-1bb360980201", 00:21:35.982 "is_configured": true, 00:21:35.982 "data_offset": 0, 00:21:35.982 "data_size": 65536 00:21:35.982 }, 00:21:35.982 { 00:21:35.982 "name": "BaseBdev2", 00:21:35.982 "uuid": "f899f447-f297-56e1-ad62-a49e0bf41891", 00:21:35.982 "is_configured": true, 00:21:35.982 "data_offset": 0, 00:21:35.982 "data_size": 65536 00:21:35.982 }, 00:21:35.982 { 00:21:35.982 "name": "BaseBdev3", 00:21:35.982 "uuid": "cbd24732-83c3-5336-855b-3b32670661f5", 00:21:35.982 "is_configured": true, 00:21:35.982 "data_offset": 0, 00:21:35.982 "data_size": 65536 00:21:35.982 }, 00:21:35.982 { 00:21:35.982 "name": "BaseBdev4", 00:21:35.982 "uuid": "20ffd5d3-1a14-5a55-b3dd-518bd90267fb", 00:21:35.982 "is_configured": true, 00:21:35.982 "data_offset": 0, 00:21:35.982 "data_size": 65536 00:21:35.982 } 00:21:35.982 ] 00:21:35.982 }' 00:21:35.982 03:16:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:35.982 03:16:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.548 03:16:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:36.548 03:16:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:21:36.807 [2024-05-15 03:16:07.878764] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:36.807 03:16:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=65536 00:21:36.807 03:16:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.807 03:16:07 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:37.065 03:16:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # data_offset=0 00:21:37.065 03:16:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@626 -- # '[' false = true ']' 00:21:37.065 03:16:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' true = true ']' 00:21:37.065 03:16:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@630 -- # local write_unit_size 00:21:37.065 03:16:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:37.065 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:37.065 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:37.065 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:37.065 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:37.065 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:37.065 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:37.065 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:37.066 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:37.066 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:37.358 [2024-05-15 03:16:08.379863] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2354ca0 00:21:37.358 /dev/nbd0 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.358 1+0 records in 00:21:37.358 1+0 records out 00:21:37.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233629 s, 17.5 MB/s 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:37.358 03:16:08 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # '[' raid1 = raid5f ']' 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@638 -- # write_unit_size=1 00:21:37.358 03:16:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:45.490 65536+0 records in 00:21:45.490 65536+0 records out 00:21:45.490 33554432 bytes (34 MB, 32 MiB) copied, 7.07888 s, 4.7 MB/s 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@641 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:45.490 [2024-05-15 03:16:15.710585] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:45.490 [2024-05-15 03:16:15.855010] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
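To recap the steps just traced: the spare is a passthru over a zero-latency delay bdev, the four base bdevs are assembled into raid_bdev1 (raid1, no superblock), the array is exported over /dev/nbd0 and filled end to end (65536 blocks x 512 B = 33554432 B, exactly the raid_bdev_size read back via jq), and BaseBdev1 is then hot-removed; the state check continuing below confirms the degraded 3-of-4 layout. A condensed sketch of that seed-and-degrade step, reusing the $rpc/$sock shorthand:

    $rpc -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
    $rpc -s "$sock" nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct    # fill all 32 MiB
    $rpc -s "$sock" nbd_stop_disk /dev/nbd0
    $rpc -s "$sock" bdev_raid_remove_base_bdev BaseBdev1               # raid1 stays online, 3 of 4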
00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.490 03:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.490 03:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.490 "name": "raid_bdev1", 00:21:45.490 "uuid": "8849c31b-349a-4e04-91ab-0d8642b68906", 00:21:45.490 "strip_size_kb": 0, 00:21:45.490 "state": "online", 00:21:45.490 "raid_level": "raid1", 00:21:45.490 "superblock": false, 00:21:45.490 "num_base_bdevs": 4, 00:21:45.490 "num_base_bdevs_discovered": 3, 00:21:45.490 "num_base_bdevs_operational": 3, 00:21:45.490 "base_bdevs_list": [ 00:21:45.490 { 00:21:45.490 "name": null, 00:21:45.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.490 "is_configured": false, 00:21:45.490 "data_offset": 0, 00:21:45.490 "data_size": 65536 00:21:45.490 }, 00:21:45.490 { 00:21:45.490 "name": "BaseBdev2", 00:21:45.490 "uuid": "f899f447-f297-56e1-ad62-a49e0bf41891", 00:21:45.490 "is_configured": true, 00:21:45.490 "data_offset": 0, 00:21:45.490 "data_size": 65536 00:21:45.490 }, 00:21:45.490 { 00:21:45.490 "name": "BaseBdev3", 00:21:45.490 "uuid": "cbd24732-83c3-5336-855b-3b32670661f5", 00:21:45.490 "is_configured": true, 00:21:45.490 "data_offset": 0, 00:21:45.490 "data_size": 65536 00:21:45.490 }, 00:21:45.490 { 00:21:45.490 "name": "BaseBdev4", 00:21:45.490 "uuid": "20ffd5d3-1a14-5a55-b3dd-518bd90267fb", 00:21:45.490 "is_configured": true, 00:21:45.490 "data_offset": 0, 00:21:45.490 "data_size": 65536 00:21:45.490 } 00:21:45.490 ] 00:21:45.490 }' 00:21:45.490 03:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.490 03:16:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.749 03:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:46.007 [2024-05-15 03:16:16.921882] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:46.007 [2024-05-15 03:16:16.925811] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2354ca0 00:21:46.007 [2024-05-15 03:16:16.927943] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:46.007 03:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # sleep 1 00:21:46.944 03:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.944 03:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:46.944 03:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:46.944 03:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:46.944 03:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 
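With the spare attached via bdev_raid_add_base_bdev, the target starts a rebuild onto it, and the process/progress object appearing in the JSON below is what the test polls. A sketch of an equivalent one-shot query (the // "none" defaults mirror the @190/@191 jq filters in the trace; the percent field is my addition from the JSON shown below):

    $rpc -s "$sock" bdev_raid_get_bdevs all | jq -r '
        .[] | select(.name == "raid_bdev1") |
        [(.process.type // "none"), (.process.target // "none"),
         (.process.progress.percent // 0 | tostring)] | join(" ")'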
00:21:46.944 03:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.944 03:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.203 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:47.203 "name": "raid_bdev1", 00:21:47.203 "uuid": "8849c31b-349a-4e04-91ab-0d8642b68906", 00:21:47.203 "strip_size_kb": 0, 00:21:47.203 "state": "online", 00:21:47.203 "raid_level": "raid1", 00:21:47.203 "superblock": false, 00:21:47.203 "num_base_bdevs": 4, 00:21:47.203 "num_base_bdevs_discovered": 4, 00:21:47.203 "num_base_bdevs_operational": 4, 00:21:47.203 "process": { 00:21:47.203 "type": "rebuild", 00:21:47.203 "target": "spare", 00:21:47.203 "progress": { 00:21:47.203 "blocks": 24576, 00:21:47.203 "percent": 37 00:21:47.203 } 00:21:47.203 }, 00:21:47.203 "base_bdevs_list": [ 00:21:47.203 { 00:21:47.203 "name": "spare", 00:21:47.203 "uuid": "364d93de-2206-57cf-8487-ab67fa9a1390", 00:21:47.203 "is_configured": true, 00:21:47.203 "data_offset": 0, 00:21:47.203 "data_size": 65536 00:21:47.203 }, 00:21:47.203 { 00:21:47.203 "name": "BaseBdev2", 00:21:47.203 "uuid": "f899f447-f297-56e1-ad62-a49e0bf41891", 00:21:47.203 "is_configured": true, 00:21:47.203 "data_offset": 0, 00:21:47.203 "data_size": 65536 00:21:47.203 }, 00:21:47.203 { 00:21:47.203 "name": "BaseBdev3", 00:21:47.203 "uuid": "cbd24732-83c3-5336-855b-3b32670661f5", 00:21:47.203 "is_configured": true, 00:21:47.203 "data_offset": 0, 00:21:47.203 "data_size": 65536 00:21:47.203 }, 00:21:47.203 { 00:21:47.203 "name": "BaseBdev4", 00:21:47.203 "uuid": "20ffd5d3-1a14-5a55-b3dd-518bd90267fb", 00:21:47.203 "is_configured": true, 00:21:47.203 "data_offset": 0, 00:21:47.203 "data_size": 65536 00:21:47.203 } 00:21:47.203 ] 00:21:47.203 }' 00:21:47.203 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:47.203 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:47.203 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:47.203 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:47.204 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:47.462 [2024-05-15 03:16:18.532999] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:47.462 [2024-05-15 03:16:18.540194] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:47.462 [2024-05-15 03:16:18.540243] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.462 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:47.462 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:47.462 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:47.462 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:47.462 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:47.462 03:16:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:47.462 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:47.462 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:47.462 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:47.462 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:47.462 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.462 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.720 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:47.720 "name": "raid_bdev1", 00:21:47.720 "uuid": "8849c31b-349a-4e04-91ab-0d8642b68906", 00:21:47.720 "strip_size_kb": 0, 00:21:47.720 "state": "online", 00:21:47.720 "raid_level": "raid1", 00:21:47.720 "superblock": false, 00:21:47.720 "num_base_bdevs": 4, 00:21:47.720 "num_base_bdevs_discovered": 3, 00:21:47.720 "num_base_bdevs_operational": 3, 00:21:47.720 "base_bdevs_list": [ 00:21:47.720 { 00:21:47.720 "name": null, 00:21:47.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.720 "is_configured": false, 00:21:47.720 "data_offset": 0, 00:21:47.720 "data_size": 65536 00:21:47.721 }, 00:21:47.721 { 00:21:47.721 "name": "BaseBdev2", 00:21:47.721 "uuid": "f899f447-f297-56e1-ad62-a49e0bf41891", 00:21:47.721 "is_configured": true, 00:21:47.721 "data_offset": 0, 00:21:47.721 "data_size": 65536 00:21:47.721 }, 00:21:47.721 { 00:21:47.721 "name": "BaseBdev3", 00:21:47.721 "uuid": "cbd24732-83c3-5336-855b-3b32670661f5", 00:21:47.721 "is_configured": true, 00:21:47.721 "data_offset": 0, 00:21:47.721 "data_size": 65536 00:21:47.721 }, 00:21:47.721 { 00:21:47.721 "name": "BaseBdev4", 00:21:47.721 "uuid": "20ffd5d3-1a14-5a55-b3dd-518bd90267fb", 00:21:47.721 "is_configured": true, 00:21:47.721 "data_offset": 0, 00:21:47.721 "data_size": 65536 00:21:47.721 } 00:21:47.721 ] 00:21:47.721 }' 00:21:47.721 03:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:47.721 03:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:48.656 "name": "raid_bdev1", 00:21:48.656 "uuid": "8849c31b-349a-4e04-91ab-0d8642b68906", 00:21:48.656 "strip_size_kb": 0, 00:21:48.656 "state": "online", 00:21:48.656 
"raid_level": "raid1", 00:21:48.656 "superblock": false, 00:21:48.656 "num_base_bdevs": 4, 00:21:48.656 "num_base_bdevs_discovered": 3, 00:21:48.656 "num_base_bdevs_operational": 3, 00:21:48.656 "base_bdevs_list": [ 00:21:48.656 { 00:21:48.656 "name": null, 00:21:48.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.656 "is_configured": false, 00:21:48.656 "data_offset": 0, 00:21:48.656 "data_size": 65536 00:21:48.656 }, 00:21:48.656 { 00:21:48.656 "name": "BaseBdev2", 00:21:48.656 "uuid": "f899f447-f297-56e1-ad62-a49e0bf41891", 00:21:48.656 "is_configured": true, 00:21:48.656 "data_offset": 0, 00:21:48.656 "data_size": 65536 00:21:48.656 }, 00:21:48.656 { 00:21:48.656 "name": "BaseBdev3", 00:21:48.656 "uuid": "cbd24732-83c3-5336-855b-3b32670661f5", 00:21:48.656 "is_configured": true, 00:21:48.656 "data_offset": 0, 00:21:48.656 "data_size": 65536 00:21:48.656 }, 00:21:48.656 { 00:21:48.656 "name": "BaseBdev4", 00:21:48.656 "uuid": "20ffd5d3-1a14-5a55-b3dd-518bd90267fb", 00:21:48.656 "is_configured": true, 00:21:48.656 "data_offset": 0, 00:21:48.656 "data_size": 65536 00:21:48.656 } 00:21:48.656 ] 00:21:48.656 }' 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:48.656 03:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@667 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:48.915 [2024-05-15 03:16:20.036238] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:48.915 [2024-05-15 03:16:20.040322] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2352f10 00:21:48.915 [2024-05-15 03:16:20.041911] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:48.915 03:16:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # sleep 1 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:50.292 "name": "raid_bdev1", 00:21:50.292 "uuid": "8849c31b-349a-4e04-91ab-0d8642b68906", 00:21:50.292 "strip_size_kb": 0, 00:21:50.292 "state": "online", 00:21:50.292 "raid_level": "raid1", 00:21:50.292 "superblock": false, 00:21:50.292 "num_base_bdevs": 4, 00:21:50.292 "num_base_bdevs_discovered": 4, 00:21:50.292 
"num_base_bdevs_operational": 4, 00:21:50.292 "process": { 00:21:50.292 "type": "rebuild", 00:21:50.292 "target": "spare", 00:21:50.292 "progress": { 00:21:50.292 "blocks": 24576, 00:21:50.292 "percent": 37 00:21:50.292 } 00:21:50.292 }, 00:21:50.292 "base_bdevs_list": [ 00:21:50.292 { 00:21:50.292 "name": "spare", 00:21:50.292 "uuid": "364d93de-2206-57cf-8487-ab67fa9a1390", 00:21:50.292 "is_configured": true, 00:21:50.292 "data_offset": 0, 00:21:50.292 "data_size": 65536 00:21:50.292 }, 00:21:50.292 { 00:21:50.292 "name": "BaseBdev2", 00:21:50.292 "uuid": "f899f447-f297-56e1-ad62-a49e0bf41891", 00:21:50.292 "is_configured": true, 00:21:50.292 "data_offset": 0, 00:21:50.292 "data_size": 65536 00:21:50.292 }, 00:21:50.292 { 00:21:50.292 "name": "BaseBdev3", 00:21:50.292 "uuid": "cbd24732-83c3-5336-855b-3b32670661f5", 00:21:50.292 "is_configured": true, 00:21:50.292 "data_offset": 0, 00:21:50.292 "data_size": 65536 00:21:50.292 }, 00:21:50.292 { 00:21:50.292 "name": "BaseBdev4", 00:21:50.292 "uuid": "20ffd5d3-1a14-5a55-b3dd-518bd90267fb", 00:21:50.292 "is_configured": true, 00:21:50.292 "data_offset": 0, 00:21:50.292 "data_size": 65536 00:21:50.292 } 00:21:50.292 ] 00:21:50.292 }' 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # '[' false = true ']' 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=4 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # '[' 4 -gt 2 ']' 00:21:50.292 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@700 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:50.551 [2024-05-15 03:16:21.646135] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:50.551 [2024-05-15 03:16:21.654117] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x2352f10 00:21:50.551 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@703 -- # base_bdevs[1]= 00:21:50.551 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@704 -- # (( num_base_bdevs_operational-- )) 00:21:50.551 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.551 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.551 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:50.551 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:50.551 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.551 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.551 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:50.810 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:50.810 "name": "raid_bdev1", 00:21:50.810 "uuid": "8849c31b-349a-4e04-91ab-0d8642b68906", 00:21:50.810 "strip_size_kb": 0, 00:21:50.810 "state": "online", 00:21:50.810 "raid_level": "raid1", 00:21:50.810 "superblock": false, 00:21:50.810 "num_base_bdevs": 4, 00:21:50.810 "num_base_bdevs_discovered": 3, 00:21:50.810 "num_base_bdevs_operational": 3, 00:21:50.810 "process": { 00:21:50.810 "type": "rebuild", 00:21:50.810 "target": "spare", 00:21:50.810 "progress": { 00:21:50.810 "blocks": 36864, 00:21:50.810 "percent": 56 00:21:50.810 } 00:21:50.810 }, 00:21:50.810 "base_bdevs_list": [ 00:21:50.810 { 00:21:50.810 "name": "spare", 00:21:50.810 "uuid": "364d93de-2206-57cf-8487-ab67fa9a1390", 00:21:50.810 "is_configured": true, 00:21:50.810 "data_offset": 0, 00:21:50.810 "data_size": 65536 00:21:50.810 }, 00:21:50.810 { 00:21:50.810 "name": null, 00:21:50.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.810 "is_configured": false, 00:21:50.810 "data_offset": 0, 00:21:50.810 "data_size": 65536 00:21:50.810 }, 00:21:50.810 { 00:21:50.810 "name": "BaseBdev3", 00:21:50.810 "uuid": "cbd24732-83c3-5336-855b-3b32670661f5", 00:21:50.810 "is_configured": true, 00:21:50.810 "data_offset": 0, 00:21:50.810 "data_size": 65536 00:21:50.810 }, 00:21:50.810 { 00:21:50.810 "name": "BaseBdev4", 00:21:50.810 "uuid": "20ffd5d3-1a14-5a55-b3dd-518bd90267fb", 00:21:50.810 "is_configured": true, 00:21:50.810 "data_offset": 0, 00:21:50.810 "data_size": 65536 00:21:50.810 } 00:21:50.810 ] 00:21:50.810 }' 00:21:50.810 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:51.068 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:51.068 03:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:51.068 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.068 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # local timeout=759 00:21:51.068 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:21:51.068 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:51.068 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:51.068 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:51.068 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:51.068 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:51.068 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.068 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.327 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:51.327 "name": "raid_bdev1", 00:21:51.327 "uuid": "8849c31b-349a-4e04-91ab-0d8642b68906", 00:21:51.327 "strip_size_kb": 0, 00:21:51.327 "state": "online", 00:21:51.327 "raid_level": "raid1", 00:21:51.327 "superblock": false, 00:21:51.328 "num_base_bdevs": 4, 00:21:51.328 
"num_base_bdevs_discovered": 3, 00:21:51.328 "num_base_bdevs_operational": 3, 00:21:51.328 "process": { 00:21:51.328 "type": "rebuild", 00:21:51.328 "target": "spare", 00:21:51.328 "progress": { 00:21:51.328 "blocks": 45056, 00:21:51.328 "percent": 68 00:21:51.328 } 00:21:51.328 }, 00:21:51.328 "base_bdevs_list": [ 00:21:51.328 { 00:21:51.328 "name": "spare", 00:21:51.328 "uuid": "364d93de-2206-57cf-8487-ab67fa9a1390", 00:21:51.328 "is_configured": true, 00:21:51.328 "data_offset": 0, 00:21:51.328 "data_size": 65536 00:21:51.328 }, 00:21:51.328 { 00:21:51.328 "name": null, 00:21:51.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.328 "is_configured": false, 00:21:51.328 "data_offset": 0, 00:21:51.328 "data_size": 65536 00:21:51.328 }, 00:21:51.328 { 00:21:51.328 "name": "BaseBdev3", 00:21:51.328 "uuid": "cbd24732-83c3-5336-855b-3b32670661f5", 00:21:51.328 "is_configured": true, 00:21:51.328 "data_offset": 0, 00:21:51.328 "data_size": 65536 00:21:51.328 }, 00:21:51.328 { 00:21:51.328 "name": "BaseBdev4", 00:21:51.328 "uuid": "20ffd5d3-1a14-5a55-b3dd-518bd90267fb", 00:21:51.328 "is_configured": true, 00:21:51.328 "data_offset": 0, 00:21:51.328 "data_size": 65536 00:21:51.328 } 00:21:51.328 ] 00:21:51.328 }' 00:21:51.328 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:51.328 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:51.328 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:51.328 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.328 03:16:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # sleep 1 00:21:52.263 [2024-05-15 03:16:23.266258] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:52.263 [2024-05-15 03:16:23.266315] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:52.263 [2024-05-15 03:16:23.266349] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.263 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:21:52.263 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.263 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.263 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:52.263 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:52.263 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:52.263 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.263 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.521 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:52.521 "name": "raid_bdev1", 00:21:52.521 "uuid": "8849c31b-349a-4e04-91ab-0d8642b68906", 00:21:52.521 "strip_size_kb": 0, 00:21:52.521 "state": "online", 00:21:52.521 "raid_level": "raid1", 00:21:52.521 "superblock": false, 00:21:52.521 "num_base_bdevs": 4, 00:21:52.521 "num_base_bdevs_discovered": 3, 00:21:52.521 
"num_base_bdevs_operational": 3, 00:21:52.521 "base_bdevs_list": [ 00:21:52.521 { 00:21:52.521 "name": "spare", 00:21:52.521 "uuid": "364d93de-2206-57cf-8487-ab67fa9a1390", 00:21:52.521 "is_configured": true, 00:21:52.521 "data_offset": 0, 00:21:52.521 "data_size": 65536 00:21:52.521 }, 00:21:52.521 { 00:21:52.521 "name": null, 00:21:52.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.521 "is_configured": false, 00:21:52.521 "data_offset": 0, 00:21:52.521 "data_size": 65536 00:21:52.521 }, 00:21:52.521 { 00:21:52.521 "name": "BaseBdev3", 00:21:52.521 "uuid": "cbd24732-83c3-5336-855b-3b32670661f5", 00:21:52.521 "is_configured": true, 00:21:52.521 "data_offset": 0, 00:21:52.521 "data_size": 65536 00:21:52.521 }, 00:21:52.521 { 00:21:52.521 "name": "BaseBdev4", 00:21:52.521 "uuid": "20ffd5d3-1a14-5a55-b3dd-518bd90267fb", 00:21:52.521 "is_configured": true, 00:21:52.521 "data_offset": 0, 00:21:52.521 "data_size": 65536 00:21:52.521 } 00:21:52.521 ] 00:21:52.521 }' 00:21:52.521 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:52.778 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:52.778 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:52.778 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:52.778 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # break 00:21:52.778 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:52.778 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.778 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:52.779 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:52.779 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:52.779 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.779 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.037 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:53.037 "name": "raid_bdev1", 00:21:53.037 "uuid": "8849c31b-349a-4e04-91ab-0d8642b68906", 00:21:53.037 "strip_size_kb": 0, 00:21:53.037 "state": "online", 00:21:53.037 "raid_level": "raid1", 00:21:53.037 "superblock": false, 00:21:53.037 "num_base_bdevs": 4, 00:21:53.037 "num_base_bdevs_discovered": 3, 00:21:53.037 "num_base_bdevs_operational": 3, 00:21:53.037 "base_bdevs_list": [ 00:21:53.037 { 00:21:53.037 "name": "spare", 00:21:53.037 "uuid": "364d93de-2206-57cf-8487-ab67fa9a1390", 00:21:53.037 "is_configured": true, 00:21:53.037 "data_offset": 0, 00:21:53.037 "data_size": 65536 00:21:53.037 }, 00:21:53.037 { 00:21:53.037 "name": null, 00:21:53.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.037 "is_configured": false, 00:21:53.037 "data_offset": 0, 00:21:53.037 "data_size": 65536 00:21:53.037 }, 00:21:53.037 { 00:21:53.037 "name": "BaseBdev3", 00:21:53.037 "uuid": "cbd24732-83c3-5336-855b-3b32670661f5", 00:21:53.037 "is_configured": true, 00:21:53.037 "data_offset": 0, 00:21:53.037 "data_size": 65536 00:21:53.037 }, 00:21:53.037 { 00:21:53.037 
"name": "BaseBdev4", 00:21:53.037 "uuid": "20ffd5d3-1a14-5a55-b3dd-518bd90267fb", 00:21:53.037 "is_configured": true, 00:21:53.037 "data_offset": 0, 00:21:53.037 "data_size": 65536 00:21:53.037 } 00:21:53.037 ] 00:21:53.037 }' 00:21:53.037 03:16:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.037 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.296 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:53.296 "name": "raid_bdev1", 00:21:53.296 "uuid": "8849c31b-349a-4e04-91ab-0d8642b68906", 00:21:53.296 "strip_size_kb": 0, 00:21:53.296 "state": "online", 00:21:53.296 "raid_level": "raid1", 00:21:53.296 "superblock": false, 00:21:53.296 "num_base_bdevs": 4, 00:21:53.296 "num_base_bdevs_discovered": 3, 00:21:53.296 "num_base_bdevs_operational": 3, 00:21:53.296 "base_bdevs_list": [ 00:21:53.296 { 00:21:53.296 "name": "spare", 00:21:53.296 "uuid": "364d93de-2206-57cf-8487-ab67fa9a1390", 00:21:53.296 "is_configured": true, 00:21:53.296 "data_offset": 0, 00:21:53.296 "data_size": 65536 00:21:53.296 }, 00:21:53.296 { 00:21:53.296 "name": null, 00:21:53.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.296 "is_configured": false, 00:21:53.296 "data_offset": 0, 00:21:53.296 "data_size": 65536 00:21:53.296 }, 00:21:53.296 { 00:21:53.297 "name": "BaseBdev3", 00:21:53.297 "uuid": "cbd24732-83c3-5336-855b-3b32670661f5", 00:21:53.297 "is_configured": true, 00:21:53.297 "data_offset": 0, 00:21:53.297 "data_size": 65536 00:21:53.297 }, 00:21:53.297 { 00:21:53.297 "name": "BaseBdev4", 00:21:53.297 "uuid": "20ffd5d3-1a14-5a55-b3dd-518bd90267fb", 00:21:53.297 "is_configured": true, 00:21:53.297 "data_offset": 0, 00:21:53.297 "data_size": 65536 00:21:53.297 } 00:21:53.297 ] 00:21:53.297 }' 00:21:53.297 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
00:21:53.297 03:16:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.863 03:16:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:54.121 [2024-05-15 03:16:25.190907] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:54.121 [2024-05-15 03:16:25.190934] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:54.121 [2024-05-15 03:16:25.190990] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:54.121 [2024-05-15 03:16:25.191063] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:54.121 [2024-05-15 03:16:25.191072] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23506b0 name raid_bdev1, state offline 00:21:54.121 03:16:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@725 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.121 03:16:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@725 -- # jq length 00:21:54.379 03:16:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]] 00:21:54.379 03:16:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@727 -- # '[' true = true ']' 00:21:54.379 03:16:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@728 -- # '[' false = true ']' 00:21:54.379 03:16:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:54.379 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:54.379 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:54.379 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:54.379 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:54.380 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:54.380 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:54.380 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:54.380 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:54.380 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:54.637 /dev/nbd0 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 
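The deletion check just traced is the teardown counterpart of the create: bdev_raid_delete takes the array from online to offline and destructs it, and a jq length of 0 over bdev_raid_get_bdevs all proves no raid bdevs remain; the two nbd exports being set up here and below feed the final data comparison. A minimal sketch of the delete-and-verify step:

    $rpc -s "$sock" bdev_raid_delete raid_bdev1
    [ "$($rpc -s "$sock" bdev_raid_get_bdevs all | jq length)" -eq 0 ]   # expect an empty list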
00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:54.637 1+0 records in 00:21:54.637 1+0 records out 00:21:54.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198594 s, 20.6 MB/s 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:54.637 03:16:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:54.896 /dev/nbd1 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:54.896 1+0 records in 00:21:54.896 1+0 records out 00:21:54.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024576 s, 16.7 MB/s 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 
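Because this run was invoked with verify=true, the test now proves the rebuild actually copied the data: BaseBdev1 still holds the pattern written before it was hot-removed, the spare holds whatever the rebuild reconstructed from the surviving raid1 mirrors, and with no superblock the data_offset is 0, so the two exports must match byte for byte. A sketch of the comparison performed next:

    $rpc -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc -s "$sock" nbd_start_disk spare /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1      # -i 0: skip nothing; any mismatch fails the test
    $rpc -s "$sock" nbd_stop_disk /dev/nbd0
    $rpc -s "$sock" nbd_stop_disk /dev/nbd1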
00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:54.896 03:16:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:55.154 03:16:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@744 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:55.154 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:55.154 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:55.154 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:55.154 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:55.154 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:55.154 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:55.413 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:55.413 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:55.413 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:55.413 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:55.413 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:55.413 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:55.413 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:55.413 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:55.413 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:55.413 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@748 -- # '[' false = true ']' 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@795 -- # killprocess 4170510 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 4170510 ']' 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 4170510 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test 
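The single cmp above is the actual pass/fail criterion of raid_rebuild_test: /dev/nbd0 exports the surviving BaseBdev1 and /dev/nbd1 the freshly rebuilt spare, so a byte-for-byte match from offset 0 proves the rebuild reproduced the mirror exactly. In sketch form (device names from this run; in the superblock variant below one would expect the compare to skip the metadata region first, e.g. via cmp's -i skip offset):

  cmp -i 0 /dev/nbd0 /dev/nbd1 && echo "rebuilt spare matches BaseBdev1"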
-- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4170510
00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4170510'
00:21:55.672 killing process with pid 4170510
00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@965 -- # kill 4170510
00:21:55.672 Received shutdown signal, test time was about 60.000000 seconds
00:21:55.672 00
00:21:55.672 Latency(us)
00:21:55.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:55.672 ===================================================================================================================
00:21:55.672 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:21:55.672 [2024-05-15 03:16:26.715292] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:55.672 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # wait 4170510
00:21:55.672 [2024-05-15 03:16:26.757900] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:55.931 03:16:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@797 -- # return 0
00:21:55.931
00:21:55.931 real 0m24.234s
00:21:55.931 user 0m34.119s
00:21:55.931 sys 0m3.954s
00:21:55.931 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable
00:21:55.931 03:16:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:21:55.931 ************************************
00:21:55.931 END TEST raid_rebuild_test
00:21:55.931 ************************************
00:21:55.931 03:16:27 bdev_raid -- bdev/bdev_raid.sh@824 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true
00:21:55.931 03:16:27 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:21:55.931 03:16:27 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:21:55.931 03:16:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:21:55.931 ************************************
00:21:55.931 START TEST raid_rebuild_test_sb
00:21:55.931 ************************************
00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 true false true
00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1
00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=4
00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local superblock=true
00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local background_io=false
00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local verify=true
00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i = 1 ))
00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs ))
00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # echo BaseBdev1
00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i++ ))
00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb
-- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # echo BaseBdev2 00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # echo BaseBdev3 00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # echo BaseBdev4 00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:21:55.931 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # local strip_size 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # local create_arg 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local data_offset 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # '[' true = true ']' 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # create_arg+=' -s' 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # raid_pid=4174802 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # waitforlisten 4174802 /var/tmp/spdk-raid.sock 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 4174802 ']' 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:55.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
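The process under test is bdevperf with its RPC server on a private UNIX socket, so the raid stack can be assembled before any I/O runs. Reading the command line above: -r names the RPC socket, -T the bdev under test, -t 60 the run time in seconds, -w randrw with -M 50 a 50/50 read/write mix, -o 3M the I/O size, -q 2 the queue depth, -z holds the workload until the test starts it over RPC, and -L bdev_raid enables that debug log flag (-U is taken verbatim from the script). waitforlisten then polls until the socket answers; a common sketch of that loop (iteration count and sleep interval are assumptions):

  waitforlisten() {
      local pid=$1 sock=$2 i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # the app died while starting up
          # any successful RPC proves the socket is live; rpc_get_methods is always available
          /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }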
00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:55.932 03:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:56.190 [2024-05-15 03:16:27.125769] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:21:56.190 [2024-05-15 03:16:27.125822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174802 ] 00:21:56.190 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:56.190 Zero copy mechanism will not be used. 00:21:56.190 [2024-05-15 03:16:27.222801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.190 [2024-05-15 03:16:27.317074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.449 [2024-05-15 03:16:27.371396] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:56.449 [2024-05-15 03:16:27.371425] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:57.042 03:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:57.042 03:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:21:57.042 03:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:21:57.042 03:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:57.300 BaseBdev1_malloc 00:21:57.300 03:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:57.557 [2024-05-15 03:16:28.567661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:57.557 [2024-05-15 03:16:28.567704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.557 [2024-05-15 03:16:28.567723] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b6b00 00:21:57.557 [2024-05-15 03:16:28.567733] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.557 [2024-05-15 03:16:28.569449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.557 [2024-05-15 03:16:28.569476] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:57.557 BaseBdev1 00:21:57.557 03:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:21:57.557 03:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:57.816 BaseBdev2_malloc 00:21:57.816 03:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:58.075 [2024-05-15 03:16:29.081690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:58.075 [2024-05-15 03:16:29.081731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
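Each of the four base devices is the same two-layer stack built above: a 32 MB malloc bdev with 512-byte blocks (65536 blocks) wrapped in a passthru bdev, which gives the raid module an ordinary claimable bdev to stack on. The sizes reported later follow directly: the superblock region takes the first 2048 blocks (data_offset), leaving data_size = 63488 blocks per member. One pass of the creation loop as a sketch (rpc alias as above):

  for i in 1 2 3 4; do
      $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc    # 32 MB total, 512 B blocks
      $rpc bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
  done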
00:21:58.075 [2024-05-15 03:16:29.081747] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x275c860 00:21:58.075 [2024-05-15 03:16:29.081757] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.075 [2024-05-15 03:16:29.083310] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.075 [2024-05-15 03:16:29.083335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:58.075 BaseBdev2 00:21:58.075 03:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:21:58.075 03:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:58.334 BaseBdev3_malloc 00:21:58.334 03:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:58.592 [2024-05-15 03:16:29.595640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:58.592 [2024-05-15 03:16:29.595683] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.592 [2024-05-15 03:16:29.595699] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x275e080 00:21:58.592 [2024-05-15 03:16:29.595709] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.592 [2024-05-15 03:16:29.597276] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.592 [2024-05-15 03:16:29.597302] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:58.592 BaseBdev3 00:21:58.592 03:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:21:58.592 03:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:58.851 BaseBdev4_malloc 00:21:58.851 03:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:59.109 [2024-05-15 03:16:30.101423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:59.109 [2024-05-15 03:16:30.101470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.109 [2024-05-15 03:16:30.101489] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x275cf20 00:21:59.109 [2024-05-15 03:16:30.101498] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.109 [2024-05-15 03:16:30.103140] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.109 [2024-05-15 03:16:30.103164] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:59.109 BaseBdev4 00:21:59.109 03:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:59.367 spare_malloc 00:21:59.367 03:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:59.626 spare_delay 00:21:59.626 03:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@614 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:59.884 [2024-05-15 03:16:30.863889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:59.884 [2024-05-15 03:16:30.863930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.884 [2024-05-15 03:16:30.863957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25afe10 00:21:59.884 [2024-05-15 03:16:30.863968] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.884 [2024-05-15 03:16:30.865575] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.884 [2024-05-15 03:16:30.865602] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:59.884 spare 00:21:59.884 03:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:00.143 [2024-05-15 03:16:31.116584] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:00.143 [2024-05-15 03:16:31.117932] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:00.143 [2024-05-15 03:16:31.117989] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:00.143 [2024-05-15 03:16:31.118036] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:00.143 [2024-05-15 03:16:31.118228] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x25b06b0 00:22:00.143 [2024-05-15 03:16:31.118238] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:00.143 [2024-05-15 03:16:31.118439] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25b1bb0 00:22:00.143 [2024-05-15 03:16:31.118598] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25b06b0 00:22:00.143 [2024-05-15 03:16:31.118606] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25b06b0 00:22:00.143 [2024-05-15 03:16:31.118703] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.143 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:00.143 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:00.143 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:00.143 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:00.143 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:00.143 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:00.143 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:00.143 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local 
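The spare deliberately sits behind a delay bdev: with bdev_delay_create's latencies given in microseconds, -r 0 -t 0 leaves reads untouched while -w 100000 -n 100000 adds roughly 100 ms of average and p99 write latency, slow enough that the later rebuild can be observed and interrupted mid-flight. The array itself is created with -s, which is what makes this the _sb (superblock) variant of the test. The three steps as a sketch (names from this run):

  $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $rpc bdev_passthru_create -b spare_delay -p spare
  $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1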
num_base_bdevs 00:22:00.143 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:00.143 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:00.143 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.143 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.402 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:00.402 "name": "raid_bdev1", 00:22:00.402 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:00.402 "strip_size_kb": 0, 00:22:00.402 "state": "online", 00:22:00.402 "raid_level": "raid1", 00:22:00.402 "superblock": true, 00:22:00.402 "num_base_bdevs": 4, 00:22:00.402 "num_base_bdevs_discovered": 4, 00:22:00.402 "num_base_bdevs_operational": 4, 00:22:00.402 "base_bdevs_list": [ 00:22:00.402 { 00:22:00.402 "name": "BaseBdev1", 00:22:00.402 "uuid": "aac6e08a-6a6a-57d2-9045-ff73f048f522", 00:22:00.402 "is_configured": true, 00:22:00.402 "data_offset": 2048, 00:22:00.402 "data_size": 63488 00:22:00.402 }, 00:22:00.402 { 00:22:00.402 "name": "BaseBdev2", 00:22:00.402 "uuid": "78f97265-1be3-5237-9e31-434e73c4a879", 00:22:00.402 "is_configured": true, 00:22:00.402 "data_offset": 2048, 00:22:00.402 "data_size": 63488 00:22:00.402 }, 00:22:00.402 { 00:22:00.402 "name": "BaseBdev3", 00:22:00.402 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:00.402 "is_configured": true, 00:22:00.402 "data_offset": 2048, 00:22:00.402 "data_size": 63488 00:22:00.402 }, 00:22:00.402 { 00:22:00.402 "name": "BaseBdev4", 00:22:00.402 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:00.402 "is_configured": true, 00:22:00.402 "data_offset": 2048, 00:22:00.402 "data_size": 63488 00:22:00.402 } 00:22:00.402 ] 00:22:00.402 }' 00:22:00.402 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:00.402 03:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:00.969 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:00.969 03:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:22:01.227 [2024-05-15 03:16:32.231836] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.227 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=63488 00:22:01.227 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.227 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:01.485 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # data_offset=2048 00:22:01.485 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@626 -- # '[' false = true ']' 00:22:01.485 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' true = true ']' 00:22:01.485 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # local write_unit_size 00:22:01.485 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # nbd_start_disks 
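verify_raid_bdev_state works purely on the RPC dump above: it selects the raid_bdev1 object out of bdev_raid_get_bdevs all and compares state, raid_level, strip_size_kb and the base-bdev counts against the expected values. Equivalent spot checks as one-liners (jq paths taken from the JSON above):

  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'                          # online
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").num_base_bdevs_discovered'      # 4
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").base_bdevs_list[0].data_offset' # 2048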
/var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:01.486 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:01.486 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:01.486 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:01.486 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:01.486 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:01.486 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:01.486 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:01.486 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:01.486 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:01.744 [2024-05-15 03:16:32.740953] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25b1bb0 00:22:01.744 /dev/nbd0 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.744 1+0 records in 00:22:01.744 1+0 records out 00:22:01.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251082 s, 16.3 MB/s 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # '[' raid1 = raid5f ']' 00:22:01.744 
03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@638 -- # write_unit_size=1 00:22:01.744 03:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:22:09.857 63488+0 records in 00:22:09.857 63488+0 records out 00:22:09.857 32505856 bytes (33 MB, 31 MiB) copied, 6.88565 s, 4.7 MB/s 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@641 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:09.857 [2024-05-15 03:16:39.948717] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:09.857 03:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:09.857 [2024-05-15 03:16:40.190968] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:09.857 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:09.857 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:09.857 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:09.857 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:09.857 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:09.857 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:09.857 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:09.858 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:09.858 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:09.858 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local 
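The fill above covers exactly the array's usable capacity: 63488 blocks x 512 bytes = 32505856 bytes, pushed through /dev/nbd0 in write_unit_size chunks (1 block for raid1). With known data in place, removing BaseBdev1 degrades the array; because the members carry superblocks, the missing slot is remembered and appears in the following dumps as a null entry, with num_base_bdevs_discovered dropping from 4 to 3. The degrade step as a sketch:

  $rpc bdev_raid_remove_base_bdev BaseBdev1
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").num_base_bdevs_discovered'  # now 3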
tmp 00:22:09.858 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.858 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.858 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:09.858 "name": "raid_bdev1", 00:22:09.858 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:09.858 "strip_size_kb": 0, 00:22:09.858 "state": "online", 00:22:09.858 "raid_level": "raid1", 00:22:09.858 "superblock": true, 00:22:09.858 "num_base_bdevs": 4, 00:22:09.858 "num_base_bdevs_discovered": 3, 00:22:09.858 "num_base_bdevs_operational": 3, 00:22:09.858 "base_bdevs_list": [ 00:22:09.858 { 00:22:09.858 "name": null, 00:22:09.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.858 "is_configured": false, 00:22:09.858 "data_offset": 2048, 00:22:09.858 "data_size": 63488 00:22:09.858 }, 00:22:09.858 { 00:22:09.858 "name": "BaseBdev2", 00:22:09.858 "uuid": "78f97265-1be3-5237-9e31-434e73c4a879", 00:22:09.858 "is_configured": true, 00:22:09.858 "data_offset": 2048, 00:22:09.858 "data_size": 63488 00:22:09.858 }, 00:22:09.858 { 00:22:09.858 "name": "BaseBdev3", 00:22:09.858 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:09.858 "is_configured": true, 00:22:09.858 "data_offset": 2048, 00:22:09.858 "data_size": 63488 00:22:09.858 }, 00:22:09.858 { 00:22:09.858 "name": "BaseBdev4", 00:22:09.858 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:09.858 "is_configured": true, 00:22:09.858 "data_offset": 2048, 00:22:09.858 "data_size": 63488 00:22:09.858 } 00:22:09.858 ] 00:22:09.858 }' 00:22:09.858 03:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:09.858 03:16:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.115 03:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:10.374 [2024-05-15 03:16:41.314019] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:10.374 [2024-05-15 03:16:41.317957] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x275bdf0 00:22:10.374 [2024-05-15 03:16:41.320056] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:10.374 03:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # sleep 1 00:22:11.309 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.309 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:11.309 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:11.309 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:11.309 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:11.309 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.309 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.567 03:16:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:11.567 "name": "raid_bdev1", 00:22:11.567 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:11.567 "strip_size_kb": 0, 00:22:11.567 "state": "online", 00:22:11.567 "raid_level": "raid1", 00:22:11.567 "superblock": true, 00:22:11.567 "num_base_bdevs": 4, 00:22:11.567 "num_base_bdevs_discovered": 4, 00:22:11.567 "num_base_bdevs_operational": 4, 00:22:11.567 "process": { 00:22:11.567 "type": "rebuild", 00:22:11.567 "target": "spare", 00:22:11.567 "progress": { 00:22:11.567 "blocks": 24576, 00:22:11.567 "percent": 38 00:22:11.567 } 00:22:11.567 }, 00:22:11.567 "base_bdevs_list": [ 00:22:11.567 { 00:22:11.567 "name": "spare", 00:22:11.567 "uuid": "7201d569-8adc-5334-b606-a8edf17b7e44", 00:22:11.567 "is_configured": true, 00:22:11.567 "data_offset": 2048, 00:22:11.567 "data_size": 63488 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "name": "BaseBdev2", 00:22:11.568 "uuid": "78f97265-1be3-5237-9e31-434e73c4a879", 00:22:11.568 "is_configured": true, 00:22:11.568 "data_offset": 2048, 00:22:11.568 "data_size": 63488 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "name": "BaseBdev3", 00:22:11.568 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:11.568 "is_configured": true, 00:22:11.568 "data_offset": 2048, 00:22:11.568 "data_size": 63488 00:22:11.568 }, 00:22:11.568 { 00:22:11.568 "name": "BaseBdev4", 00:22:11.568 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:11.568 "is_configured": true, 00:22:11.568 "data_offset": 2048, 00:22:11.568 "data_size": 63488 00:22:11.568 } 00:22:11.568 ] 00:22:11.568 }' 00:22:11.568 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:11.568 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.568 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:11.568 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.568 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:11.826 [2024-05-15 03:16:42.917139] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:11.826 [2024-05-15 03:16:42.932290] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:11.826 [2024-05-15 03:16:42.932329] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.826 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:11.826 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:11.826 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:11.826 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:11.826 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:11.826 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:11.826 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:11.826 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:11.826 
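Hot-adding the spare kicks off the rebuild traced above, and the progress object is straight arithmetic over the data size: 24576 of 63488 blocks is about 38.7%, reported rounded down as "percent": 38. Removing the rebuild target mid-process is also exercised here; the WARNING with No such device above shows the rebuild being aborted rather than completed. The process checks reduce to the two jq expressions from the trace:

  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").process.type // "none"'    # rebuild
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").process.target // "none"'  # spare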
03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:11.826 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:11.826 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.826 03:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.084 03:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:12.084 "name": "raid_bdev1", 00:22:12.084 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:12.084 "strip_size_kb": 0, 00:22:12.084 "state": "online", 00:22:12.084 "raid_level": "raid1", 00:22:12.084 "superblock": true, 00:22:12.084 "num_base_bdevs": 4, 00:22:12.084 "num_base_bdevs_discovered": 3, 00:22:12.084 "num_base_bdevs_operational": 3, 00:22:12.084 "base_bdevs_list": [ 00:22:12.084 { 00:22:12.084 "name": null, 00:22:12.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.084 "is_configured": false, 00:22:12.084 "data_offset": 2048, 00:22:12.084 "data_size": 63488 00:22:12.084 }, 00:22:12.084 { 00:22:12.084 "name": "BaseBdev2", 00:22:12.084 "uuid": "78f97265-1be3-5237-9e31-434e73c4a879", 00:22:12.084 "is_configured": true, 00:22:12.084 "data_offset": 2048, 00:22:12.084 "data_size": 63488 00:22:12.084 }, 00:22:12.084 { 00:22:12.084 "name": "BaseBdev3", 00:22:12.084 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:12.084 "is_configured": true, 00:22:12.084 "data_offset": 2048, 00:22:12.084 "data_size": 63488 00:22:12.084 }, 00:22:12.084 { 00:22:12.084 "name": "BaseBdev4", 00:22:12.084 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:12.084 "is_configured": true, 00:22:12.084 "data_offset": 2048, 00:22:12.084 "data_size": 63488 00:22:12.084 } 00:22:12.084 ] 00:22:12.084 }' 00:22:12.084 03:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:12.084 03:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.018 03:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:13.018 03:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:13.018 03:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:13.018 03:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:13.018 03:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:13.018 03:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.018 03:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.018 03:16:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:13.018 "name": "raid_bdev1", 00:22:13.018 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:13.018 "strip_size_kb": 0, 00:22:13.018 "state": "online", 00:22:13.018 "raid_level": "raid1", 00:22:13.018 "superblock": true, 00:22:13.018 "num_base_bdevs": 4, 00:22:13.018 "num_base_bdevs_discovered": 3, 00:22:13.018 "num_base_bdevs_operational": 3, 00:22:13.018 "base_bdevs_list": [ 00:22:13.018 { 00:22:13.018 "name": null, 
00:22:13.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.018 "is_configured": false, 00:22:13.018 "data_offset": 2048, 00:22:13.018 "data_size": 63488 00:22:13.018 }, 00:22:13.018 { 00:22:13.018 "name": "BaseBdev2", 00:22:13.018 "uuid": "78f97265-1be3-5237-9e31-434e73c4a879", 00:22:13.018 "is_configured": true, 00:22:13.018 "data_offset": 2048, 00:22:13.018 "data_size": 63488 00:22:13.018 }, 00:22:13.018 { 00:22:13.018 "name": "BaseBdev3", 00:22:13.018 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:13.018 "is_configured": true, 00:22:13.018 "data_offset": 2048, 00:22:13.018 "data_size": 63488 00:22:13.018 }, 00:22:13.018 { 00:22:13.018 "name": "BaseBdev4", 00:22:13.018 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:13.018 "is_configured": true, 00:22:13.018 "data_offset": 2048, 00:22:13.018 "data_size": 63488 00:22:13.018 } 00:22:13.018 ] 00:22:13.018 }' 00:22:13.018 03:16:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:13.018 03:16:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:13.018 03:16:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:13.276 03:16:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:13.276 03:16:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@667 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:13.276 [2024-05-15 03:16:44.428365] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:13.276 [2024-05-15 03:16:44.432374] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25b3060 00:22:13.277 [2024-05-15 03:16:44.433939] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:13.535 03:16:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # sleep 1 00:22:14.470 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.470 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:14.470 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:14.471 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:14.471 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:14.471 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.471 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.729 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:14.729 "name": "raid_bdev1", 00:22:14.729 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:14.729 "strip_size_kb": 0, 00:22:14.729 "state": "online", 00:22:14.729 "raid_level": "raid1", 00:22:14.729 "superblock": true, 00:22:14.729 "num_base_bdevs": 4, 00:22:14.729 "num_base_bdevs_discovered": 4, 00:22:14.729 "num_base_bdevs_operational": 4, 00:22:14.729 "process": { 00:22:14.729 "type": "rebuild", 00:22:14.729 "target": "spare", 00:22:14.729 "progress": { 00:22:14.729 "blocks": 24576, 00:22:14.729 "percent": 38 00:22:14.729 
} 00:22:14.729 }, 00:22:14.729 "base_bdevs_list": [ 00:22:14.729 { 00:22:14.729 "name": "spare", 00:22:14.729 "uuid": "7201d569-8adc-5334-b606-a8edf17b7e44", 00:22:14.729 "is_configured": true, 00:22:14.729 "data_offset": 2048, 00:22:14.729 "data_size": 63488 00:22:14.729 }, 00:22:14.729 { 00:22:14.729 "name": "BaseBdev2", 00:22:14.729 "uuid": "78f97265-1be3-5237-9e31-434e73c4a879", 00:22:14.729 "is_configured": true, 00:22:14.729 "data_offset": 2048, 00:22:14.729 "data_size": 63488 00:22:14.729 }, 00:22:14.729 { 00:22:14.729 "name": "BaseBdev3", 00:22:14.729 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:14.729 "is_configured": true, 00:22:14.729 "data_offset": 2048, 00:22:14.729 "data_size": 63488 00:22:14.729 }, 00:22:14.729 { 00:22:14.729 "name": "BaseBdev4", 00:22:14.729 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:14.729 "is_configured": true, 00:22:14.729 "data_offset": 2048, 00:22:14.729 "data_size": 63488 00:22:14.729 } 00:22:14.729 ] 00:22:14.729 }' 00:22:14.729 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:14.729 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:14.729 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:14.729 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:14.729 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # '[' true = true ']' 00:22:14.729 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # '[' = false ']' 00:22:14.729 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 671: [: =: unary operator expected 00:22:14.729 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=4 00:22:14.729 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:22:14.729 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # '[' 4 -gt 2 ']' 00:22:14.729 03:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@700 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:14.987 [2024-05-15 03:16:46.030154] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:14.987 [2024-05-15 03:16:46.046131] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x25b3060 00:22:15.245 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@703 -- # base_bdevs[1]= 00:22:15.245 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@704 -- # (( num_base_bdevs_operational-- )) 00:22:15.245 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:15.245 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:15.245 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:15.245 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:15.245 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:15.245 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.245 03:16:46 
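The unary operator expected message above is a genuine script bug that this run happens to tolerate: at bdev_raid.sh line 671 an empty, unquoted variable collapses the test to [ = false ], which is malformed, and since the condition then just evaluates false, execution continues on the intended branch. The defensive form quotes the expansion (the variable name below is hypothetical, chosen only to illustrate the pattern):

  # fragile: becomes '[ = false ]' when $flag is empty or unset
  [ $flag = false ] && echo "flag disabled"
  # robust: the quoted expansion keeps the test a binary comparison even when empty
  [ "${flag:-}" = false ] && echo "flag disabled"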
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:15.504 "name": "raid_bdev1", 00:22:15.504 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:15.504 "strip_size_kb": 0, 00:22:15.504 "state": "online", 00:22:15.504 "raid_level": "raid1", 00:22:15.504 "superblock": true, 00:22:15.504 "num_base_bdevs": 4, 00:22:15.504 "num_base_bdevs_discovered": 3, 00:22:15.504 "num_base_bdevs_operational": 3, 00:22:15.504 "process": { 00:22:15.504 "type": "rebuild", 00:22:15.504 "target": "spare", 00:22:15.504 "progress": { 00:22:15.504 "blocks": 38912, 00:22:15.504 "percent": 61 00:22:15.504 } 00:22:15.504 }, 00:22:15.504 "base_bdevs_list": [ 00:22:15.504 { 00:22:15.504 "name": "spare", 00:22:15.504 "uuid": "7201d569-8adc-5334-b606-a8edf17b7e44", 00:22:15.504 "is_configured": true, 00:22:15.504 "data_offset": 2048, 00:22:15.504 "data_size": 63488 00:22:15.504 }, 00:22:15.504 { 00:22:15.504 "name": null, 00:22:15.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.504 "is_configured": false, 00:22:15.504 "data_offset": 2048, 00:22:15.504 "data_size": 63488 00:22:15.504 }, 00:22:15.504 { 00:22:15.504 "name": "BaseBdev3", 00:22:15.504 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:15.504 "is_configured": true, 00:22:15.504 "data_offset": 2048, 00:22:15.504 "data_size": 63488 00:22:15.504 }, 00:22:15.504 { 00:22:15.504 "name": "BaseBdev4", 00:22:15.504 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:15.504 "is_configured": true, 00:22:15.504 "data_offset": 2048, 00:22:15.504 "data_size": 63488 00:22:15.504 } 00:22:15.504 ] 00:22:15.504 }' 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # local timeout=783 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.504 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.763 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:15.763 "name": "raid_bdev1", 00:22:15.763 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:15.763 "strip_size_kb": 0, 00:22:15.763 "state": "online", 
00:22:15.763 "raid_level": "raid1", 00:22:15.763 "superblock": true, 00:22:15.763 "num_base_bdevs": 4, 00:22:15.763 "num_base_bdevs_discovered": 3, 00:22:15.763 "num_base_bdevs_operational": 3, 00:22:15.763 "process": { 00:22:15.763 "type": "rebuild", 00:22:15.763 "target": "spare", 00:22:15.763 "progress": { 00:22:15.763 "blocks": 47104, 00:22:15.763 "percent": 74 00:22:15.763 } 00:22:15.763 }, 00:22:15.763 "base_bdevs_list": [ 00:22:15.763 { 00:22:15.763 "name": "spare", 00:22:15.763 "uuid": "7201d569-8adc-5334-b606-a8edf17b7e44", 00:22:15.763 "is_configured": true, 00:22:15.763 "data_offset": 2048, 00:22:15.763 "data_size": 63488 00:22:15.763 }, 00:22:15.763 { 00:22:15.763 "name": null, 00:22:15.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.763 "is_configured": false, 00:22:15.763 "data_offset": 2048, 00:22:15.763 "data_size": 63488 00:22:15.763 }, 00:22:15.763 { 00:22:15.763 "name": "BaseBdev3", 00:22:15.763 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:15.763 "is_configured": true, 00:22:15.763 "data_offset": 2048, 00:22:15.763 "data_size": 63488 00:22:15.763 }, 00:22:15.763 { 00:22:15.763 "name": "BaseBdev4", 00:22:15.763 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:15.763 "is_configured": true, 00:22:15.763 "data_offset": 2048, 00:22:15.763 "data_size": 63488 00:22:15.763 } 00:22:15.763 ] 00:22:15.763 }' 00:22:15.763 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:15.763 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.763 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:15.763 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:15.763 03:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # sleep 1 00:22:16.724 [2024-05-15 03:16:47.557522] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:16.724 [2024-05-15 03:16:47.557579] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:16.724 [2024-05-15 03:16:47.557674] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.724 03:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:22:16.724 03:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.724 03:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:16.724 03:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:16.724 03:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:16.724 03:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:16.724 03:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.724 03:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.982 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:16.982 "name": "raid_bdev1", 00:22:16.982 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:16.982 "strip_size_kb": 0, 00:22:16.982 "state": "online", 
00:22:16.982 "raid_level": "raid1", 00:22:16.982 "superblock": true, 00:22:16.982 "num_base_bdevs": 4, 00:22:16.982 "num_base_bdevs_discovered": 3, 00:22:16.982 "num_base_bdevs_operational": 3, 00:22:16.982 "base_bdevs_list": [ 00:22:16.982 { 00:22:16.982 "name": "spare", 00:22:16.982 "uuid": "7201d569-8adc-5334-b606-a8edf17b7e44", 00:22:16.982 "is_configured": true, 00:22:16.982 "data_offset": 2048, 00:22:16.982 "data_size": 63488 00:22:16.982 }, 00:22:16.982 { 00:22:16.982 "name": null, 00:22:16.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.982 "is_configured": false, 00:22:16.982 "data_offset": 2048, 00:22:16.982 "data_size": 63488 00:22:16.982 }, 00:22:16.982 { 00:22:16.982 "name": "BaseBdev3", 00:22:16.982 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:16.982 "is_configured": true, 00:22:16.982 "data_offset": 2048, 00:22:16.982 "data_size": 63488 00:22:16.982 }, 00:22:16.982 { 00:22:16.982 "name": "BaseBdev4", 00:22:16.982 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:16.982 "is_configured": true, 00:22:16.982 "data_offset": 2048, 00:22:16.983 "data_size": 63488 00:22:16.983 } 00:22:16.983 ] 00:22:16.983 }' 00:22:16.983 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:16.983 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:16.983 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:16.983 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:16.983 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # break 00:22:16.983 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:16.983 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:16.983 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:16.983 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:16.983 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:16.983 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.983 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.241 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:17.241 "name": "raid_bdev1", 00:22:17.241 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:17.241 "strip_size_kb": 0, 00:22:17.241 "state": "online", 00:22:17.241 "raid_level": "raid1", 00:22:17.241 "superblock": true, 00:22:17.241 "num_base_bdevs": 4, 00:22:17.241 "num_base_bdevs_discovered": 3, 00:22:17.241 "num_base_bdevs_operational": 3, 00:22:17.241 "base_bdevs_list": [ 00:22:17.241 { 00:22:17.241 "name": "spare", 00:22:17.241 "uuid": "7201d569-8adc-5334-b606-a8edf17b7e44", 00:22:17.241 "is_configured": true, 00:22:17.241 "data_offset": 2048, 00:22:17.241 "data_size": 63488 00:22:17.241 }, 00:22:17.241 { 00:22:17.241 "name": null, 00:22:17.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.241 "is_configured": false, 00:22:17.241 "data_offset": 2048, 00:22:17.241 "data_size": 63488 00:22:17.241 }, 00:22:17.241 { 00:22:17.241 "name": 
"BaseBdev3", 00:22:17.241 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:17.241 "is_configured": true, 00:22:17.241 "data_offset": 2048, 00:22:17.241 "data_size": 63488 00:22:17.241 }, 00:22:17.241 { 00:22:17.241 "name": "BaseBdev4", 00:22:17.241 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:17.241 "is_configured": true, 00:22:17.241 "data_offset": 2048, 00:22:17.241 "data_size": 63488 00:22:17.241 } 00:22:17.241 ] 00:22:17.241 }' 00:22:17.241 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.499 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.758 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:17.758 "name": "raid_bdev1", 00:22:17.758 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:17.758 "strip_size_kb": 0, 00:22:17.758 "state": "online", 00:22:17.758 "raid_level": "raid1", 00:22:17.758 "superblock": true, 00:22:17.758 "num_base_bdevs": 4, 00:22:17.758 "num_base_bdevs_discovered": 3, 00:22:17.758 "num_base_bdevs_operational": 3, 00:22:17.758 "base_bdevs_list": [ 00:22:17.758 { 00:22:17.758 "name": "spare", 00:22:17.758 "uuid": "7201d569-8adc-5334-b606-a8edf17b7e44", 00:22:17.758 "is_configured": true, 00:22:17.758 "data_offset": 2048, 00:22:17.758 "data_size": 63488 00:22:17.758 }, 00:22:17.758 { 00:22:17.758 "name": null, 00:22:17.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.758 "is_configured": false, 00:22:17.758 "data_offset": 2048, 00:22:17.758 "data_size": 63488 00:22:17.758 }, 00:22:17.758 { 00:22:17.758 "name": "BaseBdev3", 00:22:17.758 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:17.758 "is_configured": true, 00:22:17.758 "data_offset": 2048, 00:22:17.758 "data_size": 63488 00:22:17.758 }, 00:22:17.758 { 00:22:17.758 "name": "BaseBdev4", 00:22:17.758 "uuid": 
"8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:17.758 "is_configured": true, 00:22:17.758 "data_offset": 2048, 00:22:17.758 "data_size": 63488 00:22:17.758 } 00:22:17.758 ] 00:22:17.758 }' 00:22:17.758 03:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:17.758 03:16:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.325 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:18.584 [2024-05-15 03:16:49.574642] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:18.584 [2024-05-15 03:16:49.574668] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:18.584 [2024-05-15 03:16:49.574722] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:18.584 [2024-05-15 03:16:49.574794] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:18.584 [2024-05-15 03:16:49.574803] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25b06b0 name raid_bdev1, state offline 00:22:18.584 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@725 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.584 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@725 -- # jq length 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]] 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@727 -- # '[' true = true ']' 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@728 -- # '[' false = true ']' 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:18.842 03:16:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:19.102 /dev/nbd0 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@867 -- # (( i = 1 )) 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:19.102 1+0 records in 00:22:19.102 1+0 records out 00:22:19.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214833 s, 19.1 MB/s 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:19.102 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:19.361 /dev/nbd1 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:19.361 1+0 records in 00:22:19.361 1+0 records out 00:22:19.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245818 s, 16.7 MB/s 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:19.361 03:16:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:19.361 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:19.620 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:19.620 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:19.620 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:19.620 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:19.620 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:19.620 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:19.620 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:19.620 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:19.620 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:19.620 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:19.878 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:19.878 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:19.878 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:19.878 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:19.878 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:19.878 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:19.878 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:19.878 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:19.878 
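The trace above is the data-integrity leg of the test: after raid_bdev1 is deleted, BaseBdev1 and the rebuilt spare are exported over NBD and compared byte for byte, skipping the first 1048576 bytes — which matches the "data_offset": 2048 (512-byte blocks) reported in the JSON dumps earlier, i.e. the superblock region that legitimately differs per base bdev. Stripped of the xtrace noise, the sequence reduces to roughly the following sketch, assuming only the rpc.py path and RPC socket already used throughout this log:

    # Export both bdevs over NBD, compare the payload regions, then tear down.
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc -s $sock nbd_start_disk spare /dev/nbd1
    # -i 1048576 makes cmp skip the superblock area (2048 blocks x 512 B) on both devices
    cmp -i 1048576 /dev/nbd0 /dev/nbd1
    for nbd in /dev/nbd0 /dev/nbd1; do
        $rpc -s $sock nbd_stop_disk "$nbd"
    done

cmp exits non-zero at the first differing byte, so under the script's set -e a corrupted rebuild fails the run at exactly this step.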
03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # '[' true = true ']' 00:22:19.878 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:22:19.878 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev1 ']' 00:22:19.878 03:16:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:20.137 03:16:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:20.395 [2024-05-15 03:16:51.367732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:20.395 [2024-05-15 03:16:51.367778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.395 [2024-05-15 03:16:51.367795] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x275bc50 00:22:20.395 [2024-05-15 03:16:51.367806] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.395 [2024-05-15 03:16:51.369510] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.395 [2024-05-15 03:16:51.369539] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:20.395 [2024-05-15 03:16:51.369609] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:20.395 [2024-05-15 03:16:51.369635] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:20.395 BaseBdev1 00:22:20.395 03:16:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:22:20.395 03:16:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # '[' -z '' ']' 00:22:20.395 03:16:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # continue 00:22:20.395 03:16:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:22:20.395 03:16:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev3 ']' 00:22:20.395 03:16:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:20.654 03:16:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:20.912 [2024-05-15 03:16:51.865071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:20.912 [2024-05-15 03:16:51.865110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.912 [2024-05-15 03:16:51.865126] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b31c0 00:22:20.912 [2024-05-15 03:16:51.865135] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.912 [2024-05-15 03:16:51.865469] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.912 [2024-05-15 03:16:51.865485] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:20.912 [2024-05-15 03:16:51.865542] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid 
superblock found on bdev BaseBdev3 00:22:20.912 [2024-05-15 03:16:51.865552] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:20.912 [2024-05-15 03:16:51.865559] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.912 [2024-05-15 03:16:51.865572] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25b5300 name raid_bdev1, state configuring 00:22:20.912 [2024-05-15 03:16:51.865599] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:20.912 BaseBdev3 00:22:20.912 03:16:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:22:20.912 03:16:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev4 ']' 00:22:20.912 03:16:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:20.912 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:21.170 [2024-05-15 03:16:52.266158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:21.170 [2024-05-15 03:16:52.266193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.170 [2024-05-15 03:16:52.266210] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b59e0 00:22:21.170 [2024-05-15 03:16:52.266220] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.170 [2024-05-15 03:16:52.266548] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.170 [2024-05-15 03:16:52.266563] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:21.170 [2024-05-15 03:16:52.266617] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:21.170 [2024-05-15 03:16:52.266635] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:21.170 BaseBdev4 00:22:21.170 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:21.428 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:21.687 [2024-05-15 03:16:52.759481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:21.687 [2024-05-15 03:16:52.759515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.687 [2024-05-15 03:16:52.759531] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25aec10 00:22:21.687 [2024-05-15 03:16:52.759541] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.687 [2024-05-15 03:16:52.759907] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.687 [2024-05-15 03:16:52.759923] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:21.687 [2024-05-15 03:16:52.759994] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid 
superblock found on bdev spare 00:22:21.687 [2024-05-15 03:16:52.760012] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:21.687 spare 00:22:21.687 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:21.687 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:21.687 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:21.687 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:21.687 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:21.687 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:21.687 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:21.687 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:21.687 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:21.687 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:21.687 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.687 03:16:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.945 [2024-05-15 03:16:52.860350] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x25b55a0 00:22:21.945 [2024-05-15 03:16:52.860376] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:21.945 [2024-05-15 03:16:52.860590] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25b2270 00:22:21.945 [2024-05-15 03:16:52.860757] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25b55a0 00:22:21.945 [2024-05-15 03:16:52.860766] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25b55a0 00:22:21.945 [2024-05-15 03:16:52.860893] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.945 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:21.945 "name": "raid_bdev1", 00:22:21.945 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:21.945 "strip_size_kb": 0, 00:22:21.945 "state": "online", 00:22:21.946 "raid_level": "raid1", 00:22:21.946 "superblock": true, 00:22:21.946 "num_base_bdevs": 4, 00:22:21.946 "num_base_bdevs_discovered": 3, 00:22:21.946 "num_base_bdevs_operational": 3, 00:22:21.946 "base_bdevs_list": [ 00:22:21.946 { 00:22:21.946 "name": "spare", 00:22:21.946 "uuid": "7201d569-8adc-5334-b606-a8edf17b7e44", 00:22:21.946 "is_configured": true, 00:22:21.946 "data_offset": 2048, 00:22:21.946 "data_size": 63488 00:22:21.946 }, 00:22:21.946 { 00:22:21.946 "name": null, 00:22:21.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.946 "is_configured": false, 00:22:21.946 "data_offset": 2048, 00:22:21.946 "data_size": 63488 00:22:21.946 }, 00:22:21.946 { 00:22:21.946 "name": "BaseBdev3", 00:22:21.946 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:21.946 "is_configured": true, 00:22:21.946 "data_offset": 2048, 00:22:21.946 "data_size": 63488 00:22:21.946 }, 00:22:21.946 { 00:22:21.946 "name": 
"BaseBdev4", 00:22:21.946 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:21.946 "is_configured": true, 00:22:21.946 "data_offset": 2048, 00:22:21.946 "data_size": 63488 00:22:21.946 } 00:22:21.946 ] 00:22:21.946 }' 00:22:21.946 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:21.946 03:16:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.511 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:22.511 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:22.511 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:22.511 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:22.511 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:22.511 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.511 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.768 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:22.768 "name": "raid_bdev1", 00:22:22.768 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:22.768 "strip_size_kb": 0, 00:22:22.768 "state": "online", 00:22:22.768 "raid_level": "raid1", 00:22:22.768 "superblock": true, 00:22:22.768 "num_base_bdevs": 4, 00:22:22.768 "num_base_bdevs_discovered": 3, 00:22:22.768 "num_base_bdevs_operational": 3, 00:22:22.768 "base_bdevs_list": [ 00:22:22.768 { 00:22:22.768 "name": "spare", 00:22:22.768 "uuid": "7201d569-8adc-5334-b606-a8edf17b7e44", 00:22:22.768 "is_configured": true, 00:22:22.768 "data_offset": 2048, 00:22:22.768 "data_size": 63488 00:22:22.768 }, 00:22:22.768 { 00:22:22.768 "name": null, 00:22:22.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.768 "is_configured": false, 00:22:22.768 "data_offset": 2048, 00:22:22.768 "data_size": 63488 00:22:22.768 }, 00:22:22.768 { 00:22:22.768 "name": "BaseBdev3", 00:22:22.768 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:22.768 "is_configured": true, 00:22:22.768 "data_offset": 2048, 00:22:22.768 "data_size": 63488 00:22:22.768 }, 00:22:22.768 { 00:22:22.768 "name": "BaseBdev4", 00:22:22.768 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:22.768 "is_configured": true, 00:22:22.768 "data_offset": 2048, 00:22:22.768 "data_size": 63488 00:22:22.768 } 00:22:22.768 ] 00:22:22.768 }' 00:22:22.768 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:22.768 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:22.768 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:22.768 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:22.768 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.768 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:23.026 03:16:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@762 -- # [[ spare == \s\p\a\r\e ]] 00:22:23.026 03:16:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:23.285 [2024-05-15 03:16:54.203475] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:23.285 "name": "raid_bdev1", 00:22:23.285 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:23.285 "strip_size_kb": 0, 00:22:23.285 "state": "online", 00:22:23.285 "raid_level": "raid1", 00:22:23.285 "superblock": true, 00:22:23.285 "num_base_bdevs": 4, 00:22:23.285 "num_base_bdevs_discovered": 2, 00:22:23.285 "num_base_bdevs_operational": 2, 00:22:23.285 "base_bdevs_list": [ 00:22:23.285 { 00:22:23.285 "name": null, 00:22:23.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.285 "is_configured": false, 00:22:23.285 "data_offset": 2048, 00:22:23.285 "data_size": 63488 00:22:23.285 }, 00:22:23.285 { 00:22:23.285 "name": null, 00:22:23.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.285 "is_configured": false, 00:22:23.285 "data_offset": 2048, 00:22:23.285 "data_size": 63488 00:22:23.285 }, 00:22:23.285 { 00:22:23.285 "name": "BaseBdev3", 00:22:23.285 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:23.285 "is_configured": true, 00:22:23.285 "data_offset": 2048, 00:22:23.285 "data_size": 63488 00:22:23.285 }, 00:22:23.285 { 00:22:23.285 "name": "BaseBdev4", 00:22:23.285 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:23.285 "is_configured": true, 00:22:23.285 "data_offset": 2048, 00:22:23.285 "data_size": 63488 00:22:23.285 } 00:22:23.285 ] 00:22:23.285 }' 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:23.285 03:16:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.852 03:16:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:24.110 [2024-05-15 03:16:55.222215] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:24.110 [2024-05-15 03:16:55.222363] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:22:24.110 [2024-05-15 03:16:55.222377] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:24.110 [2024-05-15 03:16:55.222403] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:24.110 [2024-05-15 03:16:55.226226] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x275d5e0 00:22:24.110 [2024-05-15 03:16:55.227657] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:24.110 03:16:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # sleep 1 00:22:25.487 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.487 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:25.487 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:25.488 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:25.488 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:25.488 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.488 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.488 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:25.488 "name": "raid_bdev1", 00:22:25.488 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:25.488 "strip_size_kb": 0, 00:22:25.488 "state": "online", 00:22:25.488 "raid_level": "raid1", 00:22:25.488 "superblock": true, 00:22:25.488 "num_base_bdevs": 4, 00:22:25.488 "num_base_bdevs_discovered": 3, 00:22:25.488 "num_base_bdevs_operational": 3, 00:22:25.488 "process": { 00:22:25.488 "type": "rebuild", 00:22:25.488 "target": "spare", 00:22:25.488 "progress": { 00:22:25.488 "blocks": 24576, 00:22:25.488 "percent": 38 00:22:25.488 } 00:22:25.488 }, 00:22:25.488 "base_bdevs_list": [ 00:22:25.488 { 00:22:25.488 "name": "spare", 00:22:25.488 "uuid": "7201d569-8adc-5334-b606-a8edf17b7e44", 00:22:25.488 "is_configured": true, 00:22:25.488 "data_offset": 2048, 00:22:25.488 "data_size": 63488 00:22:25.488 }, 00:22:25.488 { 00:22:25.488 "name": null, 00:22:25.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.488 "is_configured": false, 00:22:25.488 "data_offset": 2048, 00:22:25.488 "data_size": 63488 00:22:25.488 }, 00:22:25.488 { 00:22:25.488 "name": "BaseBdev3", 00:22:25.488 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:25.488 "is_configured": true, 00:22:25.488 "data_offset": 2048, 00:22:25.488 "data_size": 63488 00:22:25.488 }, 00:22:25.488 { 00:22:25.488 "name": "BaseBdev4", 00:22:25.488 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:25.488 "is_configured": true, 00:22:25.488 "data_offset": 2048, 00:22:25.488 "data_size": 63488 00:22:25.488 } 00:22:25.488 ] 00:22:25.488 }' 00:22:25.488 03:16:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:25.488 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.488 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:25.488 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.488 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:25.746 [2024-05-15 03:16:56.832386] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:25.746 [2024-05-15 03:16:56.839865] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:25.746 [2024-05-15 03:16:56.839906] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.746 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:25.746 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:25.746 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:25.746 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:25.746 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:25.746 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:25.746 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:25.746 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:25.746 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:25.746 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:25.746 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.746 03:16:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.004 03:16:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:26.004 "name": "raid_bdev1", 00:22:26.004 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:26.004 "strip_size_kb": 0, 00:22:26.004 "state": "online", 00:22:26.004 "raid_level": "raid1", 00:22:26.004 "superblock": true, 00:22:26.004 "num_base_bdevs": 4, 00:22:26.004 "num_base_bdevs_discovered": 2, 00:22:26.004 "num_base_bdevs_operational": 2, 00:22:26.004 "base_bdevs_list": [ 00:22:26.004 { 00:22:26.004 "name": null, 00:22:26.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.004 "is_configured": false, 00:22:26.004 "data_offset": 2048, 00:22:26.004 "data_size": 63488 00:22:26.004 }, 00:22:26.004 { 00:22:26.004 "name": null, 00:22:26.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.004 "is_configured": false, 00:22:26.004 "data_offset": 2048, 00:22:26.004 "data_size": 63488 00:22:26.004 }, 00:22:26.004 { 00:22:26.004 "name": "BaseBdev3", 00:22:26.004 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:26.004 "is_configured": true, 00:22:26.004 "data_offset": 2048, 00:22:26.004 "data_size": 63488 
00:22:26.004 }, 00:22:26.004 { 00:22:26.004 "name": "BaseBdev4", 00:22:26.004 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:26.004 "is_configured": true, 00:22:26.004 "data_offset": 2048, 00:22:26.004 "data_size": 63488 00:22:26.004 } 00:22:26.004 ] 00:22:26.004 }' 00:22:26.005 03:16:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:26.005 03:16:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.937 03:16:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:26.937 [2024-05-15 03:16:57.970874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:26.937 [2024-05-15 03:16:57.970920] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.937 [2024-05-15 03:16:57.970939] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x264c8d0 00:22:26.937 [2024-05-15 03:16:57.970948] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.937 [2024-05-15 03:16:57.971322] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.937 [2024-05-15 03:16:57.971337] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:26.937 [2024-05-15 03:16:57.971413] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:26.937 [2024-05-15 03:16:57.971423] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:22:26.937 [2024-05-15 03:16:57.971431] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
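This stretch of the log exercises both re-add paths for a base bdev: the explicit bdev_raid_add_base_bdev RPC (traced a few entries back, which restarts a rebuild onto spare), and the automatic path seen just above, where deleting the spare passthru mid-rebuild drops raid_bdev1 to 2 of 4 base bdevs while it stays online, and recreating the passthru on spare_delay lets bdev examine find the on-disk superblock and re-add spare by itself ("Re-adding bdev spare to raid bdev raid_bdev1."). As a rough sketch, the explicit remove/verify/re-add cycle comes down to the following, using the same rpc.py path and socket as above; the jq filter mirrors the one verify_raid_bdev_state runs in the trace:

    # Hot-remove a base bdev, confirm the degraded-but-online state, re-add it.
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_raid_remove_base_bdev spare
    $rpc -s $sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") |
               "\(.state) \(.num_base_bdevs_discovered)"'   # expect: online 2
    $rpc -s $sock bdev_raid_add_base_bdev raid_bdev1 spare  # kicks off a fresh rebuild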
00:22:26.937 [2024-05-15 03:16:57.971446] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:26.937 [2024-05-15 03:16:57.975366] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x264cb60 00:22:26.937 spare 00:22:26.937 [2024-05-15 03:16:57.976801] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:26.937 03:16:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:27.872 03:16:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:27.872 03:16:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:27.872 03:16:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:27.872 03:16:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:27.872 03:16:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:27.872 03:16:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.872 03:16:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.130 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:28.130 "name": "raid_bdev1", 00:22:28.130 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:28.130 "strip_size_kb": 0, 00:22:28.130 "state": "online", 00:22:28.130 "raid_level": "raid1", 00:22:28.130 "superblock": true, 00:22:28.130 "num_base_bdevs": 4, 00:22:28.130 "num_base_bdevs_discovered": 3, 00:22:28.130 "num_base_bdevs_operational": 3, 00:22:28.130 "process": { 00:22:28.130 "type": "rebuild", 00:22:28.130 "target": "spare", 00:22:28.130 "progress": { 00:22:28.131 "blocks": 24576, 00:22:28.131 "percent": 38 00:22:28.131 } 00:22:28.131 }, 00:22:28.131 "base_bdevs_list": [ 00:22:28.131 { 00:22:28.131 "name": "spare", 00:22:28.131 "uuid": "7201d569-8adc-5334-b606-a8edf17b7e44", 00:22:28.131 "is_configured": true, 00:22:28.131 "data_offset": 2048, 00:22:28.131 "data_size": 63488 00:22:28.131 }, 00:22:28.131 { 00:22:28.131 "name": null, 00:22:28.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.131 "is_configured": false, 00:22:28.131 "data_offset": 2048, 00:22:28.131 "data_size": 63488 00:22:28.131 }, 00:22:28.131 { 00:22:28.131 "name": "BaseBdev3", 00:22:28.131 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:28.131 "is_configured": true, 00:22:28.131 "data_offset": 2048, 00:22:28.131 "data_size": 63488 00:22:28.131 }, 00:22:28.131 { 00:22:28.131 "name": "BaseBdev4", 00:22:28.131 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:28.131 "is_configured": true, 00:22:28.131 "data_offset": 2048, 00:22:28.131 "data_size": 63488 00:22:28.131 } 00:22:28.131 ] 00:22:28.131 }' 00:22:28.131 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.387 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.388 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.388 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.388 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:28.646 [2024-05-15 03:16:59.577027] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:28.646 [2024-05-15 03:16:59.589012] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:28.646 [2024-05-15 03:16:59.589054] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.646 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:28.646 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:28.646 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:28.646 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:28.646 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:28.646 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:28.646 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:28.646 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:28.646 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:28.646 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:28.646 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.646 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.904 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:28.904 "name": "raid_bdev1", 00:22:28.904 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:28.904 "strip_size_kb": 0, 00:22:28.904 "state": "online", 00:22:28.904 "raid_level": "raid1", 00:22:28.904 "superblock": true, 00:22:28.904 "num_base_bdevs": 4, 00:22:28.904 "num_base_bdevs_discovered": 2, 00:22:28.904 "num_base_bdevs_operational": 2, 00:22:28.904 "base_bdevs_list": [ 00:22:28.904 { 00:22:28.904 "name": null, 00:22:28.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.904 "is_configured": false, 00:22:28.904 "data_offset": 2048, 00:22:28.904 "data_size": 63488 00:22:28.904 }, 00:22:28.904 { 00:22:28.904 "name": null, 00:22:28.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.904 "is_configured": false, 00:22:28.904 "data_offset": 2048, 00:22:28.904 "data_size": 63488 00:22:28.904 }, 00:22:28.904 { 00:22:28.904 "name": "BaseBdev3", 00:22:28.904 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:28.904 "is_configured": true, 00:22:28.904 "data_offset": 2048, 00:22:28.904 "data_size": 63488 00:22:28.904 }, 00:22:28.904 { 00:22:28.904 "name": "BaseBdev4", 00:22:28.904 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:28.904 "is_configured": true, 00:22:28.904 "data_offset": 2048, 00:22:28.904 "data_size": 63488 00:22:28.904 } 00:22:28.904 ] 00:22:28.904 }' 00:22:28.904 03:16:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:28.904 03:16:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.469 03:17:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:29.469 03:17:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.469 03:17:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:29.469 03:17:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:29.469 03:17:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.469 03:17:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.469 03:17:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.727 03:17:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:29.727 "name": "raid_bdev1", 00:22:29.727 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904", 00:22:29.727 "strip_size_kb": 0, 00:22:29.727 "state": "online", 00:22:29.727 "raid_level": "raid1", 00:22:29.727 "superblock": true, 00:22:29.727 "num_base_bdevs": 4, 00:22:29.727 "num_base_bdevs_discovered": 2, 00:22:29.727 "num_base_bdevs_operational": 2, 00:22:29.727 "base_bdevs_list": [ 00:22:29.727 { 00:22:29.727 "name": null, 00:22:29.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.727 "is_configured": false, 00:22:29.727 "data_offset": 2048, 00:22:29.727 "data_size": 63488 00:22:29.727 }, 00:22:29.727 { 00:22:29.727 "name": null, 00:22:29.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.727 "is_configured": false, 00:22:29.727 "data_offset": 2048, 00:22:29.727 "data_size": 63488 00:22:29.727 }, 00:22:29.727 { 00:22:29.727 "name": "BaseBdev3", 00:22:29.727 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35", 00:22:29.727 "is_configured": true, 00:22:29.727 "data_offset": 2048, 00:22:29.727 "data_size": 63488 00:22:29.727 }, 00:22:29.727 { 00:22:29.727 "name": "BaseBdev4", 00:22:29.727 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397", 00:22:29.727 "is_configured": true, 00:22:29.727 "data_offset": 2048, 00:22:29.727 "data_size": 63488 00:22:29.727 } 00:22:29.727 ] 00:22:29.727 }' 00:22:29.727 03:17:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:29.727 03:17:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:29.727 03:17:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:29.727 03:17:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:29.727 03:17:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:29.986 03:17:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@785 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:30.245 [2024-05-15 03:17:01.341735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:30.245 [2024-05-15 03:17:01.341776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.245 [2024-05-15 03:17:01.341792] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x275bc50 00:22:30.245 [2024-05-15 
03:17:01.341801] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:30.245 [2024-05-15 03:17:01.342142] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:30.245 [2024-05-15 03:17:01.342158] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:22:30.245 [2024-05-15 03:17:01.342216] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:22:30.245 [2024-05-15 03:17:01.342226] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:22:30.245 [2024-05-15 03:17:01.342234] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:22:30.245 BaseBdev1
00:22:30.245 03:17:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # sleep 1
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:31.650 "name": "raid_bdev1",
00:22:31.650 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904",
00:22:31.650 "strip_size_kb": 0,
00:22:31.650 "state": "online",
00:22:31.650 "raid_level": "raid1",
00:22:31.650 "superblock": true,
00:22:31.650 "num_base_bdevs": 4,
00:22:31.650 "num_base_bdevs_discovered": 2,
00:22:31.650 "num_base_bdevs_operational": 2,
00:22:31.650 "base_bdevs_list": [
00:22:31.650 {
00:22:31.650 "name": null,
00:22:31.650 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:31.650 "is_configured": false,
00:22:31.650 "data_offset": 2048,
00:22:31.650 "data_size": 63488
00:22:31.650 },
00:22:31.650 {
00:22:31.650 "name": null,
00:22:31.650 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:31.650 "is_configured": false,
00:22:31.650 "data_offset": 2048,
00:22:31.650 "data_size": 63488
00:22:31.650 },
00:22:31.650 {
00:22:31.650 "name": "BaseBdev3",
00:22:31.650 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35",
00:22:31.650 "is_configured": true,
00:22:31.650 "data_offset": 2048,
00:22:31.650 "data_size": 63488
00:22:31.650 },
00:22:31.650 {
00:22:31.650 "name": "BaseBdev4",
00:22:31.650 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397",
00:22:31.650 "is_configured": true,
00:22:31.650 "data_offset": 2048,
00:22:31.650 "data_size": 63488
00:22:31.650 }
00:22:31.650 ]
00:22:31.650 }'
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:31.650 03:17:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:32.218 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # verify_raid_bdev_process raid_bdev1 none none
00:22:32.218 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:32.218 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:22:32.218 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=none
00:22:32.218 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:32.218 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:32.218 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:32.476 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:32.476 "name": "raid_bdev1",
00:22:32.476 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904",
00:22:32.476 "strip_size_kb": 0,
00:22:32.476 "state": "online",
00:22:32.476 "raid_level": "raid1",
00:22:32.476 "superblock": true,
00:22:32.476 "num_base_bdevs": 4,
00:22:32.476 "num_base_bdevs_discovered": 2,
00:22:32.476 "num_base_bdevs_operational": 2,
00:22:32.476 "base_bdevs_list": [
00:22:32.476 {
00:22:32.476 "name": null,
00:22:32.476 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:32.476 "is_configured": false,
00:22:32.476 "data_offset": 2048,
00:22:32.476 "data_size": 63488
00:22:32.476 },
00:22:32.476 {
00:22:32.476 "name": null,
00:22:32.476 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:32.476 "is_configured": false,
00:22:32.476 "data_offset": 2048,
00:22:32.476 "data_size": 63488
00:22:32.476 },
00:22:32.476 {
00:22:32.476 "name": "BaseBdev3",
00:22:32.476 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35",
00:22:32.476 "is_configured": true,
00:22:32.476 "data_offset": 2048,
00:22:32.476 "data_size": 63488
00:22:32.476 },
00:22:32.476 {
00:22:32.476 "name": "BaseBdev4",
00:22:32.476 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397",
00:22:32.476 "is_configured": true,
00:22:32.476 "data_offset": 2048,
00:22:32.476 "data_size": 63488
00:22:32.476 }
00:22:32.476 ]
00:22:32.476 }'
00:22:32.476 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:32.476 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:22:32.476 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:32.476 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:22:32.476 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:22:32.476 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0
00:22:32.477 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:22:32.477 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:22:32.477 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:32.477 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:22:32.477 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:32.477 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:22:32.477 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:32.477 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:22:32.477 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:22:32.477 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:22:32.735 [2024-05-15 03:17:03.856476] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:32.735 [2024-05-15 03:17:03.856594] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:22:32.735 [2024-05-15 03:17:03.856607] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:22:32.735 request:
00:22:32.735 {
00:22:32.735 "raid_bdev": "raid_bdev1",
00:22:32.735 "base_bdev": "BaseBdev1",
00:22:32.735 "method": "bdev_raid_add_base_bdev",
00:22:32.735 "req_id": 1
00:22:32.735 }
00:22:32.735 Got JSON-RPC error response
00:22:32.735 response:
00:22:32.735 {
00:22:32.735 "code": -22,
00:22:32.735 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:22:32.735 }
00:22:32.735 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1
00:22:32.735 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:22:32.735 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:22:32.735 03:17:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:22:32.735 03:17:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # sleep 1
00:22:34.111 03:17:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:22:34.111 03:17:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:34.111 03:17:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:34.111 03:17:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:22:34.111 03:17:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:22:34.111 03:17:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:34.111 03:17:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:34.111 03:17:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:34.111 03:17:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:34.111 03:17:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp
00:22:34.111 03:17:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:34.111 03:17:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:34.111 03:17:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:34.111 "name": "raid_bdev1",
00:22:34.111 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904",
00:22:34.111 "strip_size_kb": 0,
00:22:34.111 "state": "online",
00:22:34.111 "raid_level": "raid1",
00:22:34.111 "superblock": true,
00:22:34.111 "num_base_bdevs": 4,
00:22:34.111 "num_base_bdevs_discovered": 2,
00:22:34.111 "num_base_bdevs_operational": 2,
00:22:34.111 "base_bdevs_list": [
00:22:34.111 {
00:22:34.111 "name": null,
00:22:34.111 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:34.111 "is_configured": false,
00:22:34.111 "data_offset": 2048,
00:22:34.111 "data_size": 63488
00:22:34.111 },
00:22:34.111 {
00:22:34.111 "name": null,
00:22:34.111 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:34.111 "is_configured": false,
00:22:34.111 "data_offset": 2048,
00:22:34.111 "data_size": 63488
00:22:34.111 },
00:22:34.111 {
00:22:34.111 "name": "BaseBdev3",
00:22:34.111 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35",
00:22:34.111 "is_configured": true,
00:22:34.111 "data_offset": 2048,
00:22:34.111 "data_size": 63488
00:22:34.111 },
00:22:34.111 {
00:22:34.111 "name": "BaseBdev4",
00:22:34.111 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397",
00:22:34.111 "is_configured": true,
00:22:34.111 "data_offset": 2048,
00:22:34.111 "data_size": 63488
00:22:34.111 }
00:22:34.111 ]
00:22:34.111 }'
00:22:34.111 03:17:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:34.111 03:17:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:34.678 03:17:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # verify_raid_bdev_process raid_bdev1 none none
00:22:34.678 03:17:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:34.678 03:17:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:22:34.678 03:17:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local target=none
00:22:34.678 03:17:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:34.678 03:17:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:34.678 03:17:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:34.936 03:17:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:34.936 "name": "raid_bdev1",
00:22:34.936 "uuid": "ea929bb8-bf4b-4187-a659-77714eac1904",
00:22:34.936 "strip_size_kb": 0,
00:22:34.936 "state": "online",
00:22:34.936 "raid_level": "raid1",
00:22:34.936 "superblock": true,
00:22:34.936 "num_base_bdevs": 4,
00:22:34.936 "num_base_bdevs_discovered": 2,
00:22:34.936 "num_base_bdevs_operational": 2,
00:22:34.936 "base_bdevs_list": [
00:22:34.936 {
00:22:34.936 "name": null,
00:22:34.936 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:34.936 "is_configured": false,
00:22:34.936 "data_offset": 2048,
00:22:34.936 "data_size": 63488
00:22:34.936 },
00:22:34.936 {
00:22:34.936 "name": null,
00:22:34.936 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:34.936 "is_configured": false,
00:22:34.936 "data_offset": 2048,
00:22:34.936 "data_size": 63488
00:22:34.936 },
00:22:34.936 {
00:22:34.936 "name": "BaseBdev3",
00:22:34.936 "uuid": "2f3d2805-5929-58a6-af00-fdfd2642bf35",
00:22:34.936 "is_configured": true,
00:22:34.936 "data_offset": 2048,
00:22:34.936 "data_size": 63488
00:22:34.936 },
00:22:34.936 {
00:22:34.936 "name": "BaseBdev4",
00:22:34.936 "uuid": "8ecbe75f-c8eb-52c9-a7a1-0fb3643c3397",
00:22:34.936 "is_configured": true,
00:22:34.936 "data_offset": 2048,
00:22:34.936 "data_size": 63488
00:22:34.936 }
00:22:34.937 ]
00:22:34.937 }'
00:22:34.937 03:17:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:34.937 03:17:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:22:34.937 03:17:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:35.196 03:17:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:22:35.196 03:17:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # killprocess 4174802
00:22:35.196 03:17:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 4174802 ']'
00:22:35.196 03:17:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 4174802
00:22:35.196 03:17:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname
00:22:35.196 03:17:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:22:35.196 03:17:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4174802
00:22:35.196 03:17:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:22:35.196 03:17:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:22:35.196 03:17:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4174802'
00:22:35.196 killing process with pid 4174802
00:22:35.196 03:17:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 4174802
00:22:35.196 Received shutdown signal, test time was about 60.000000 seconds
00:22:35.196
00:22:35.196 Latency(us)
00:22:35.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:35.196 ===================================================================================================================
00:22:35.196 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:35.196 [2024-05-15 03:17:06.150113] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:35.196 [2024-05-15 03:17:06.150199] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:35.196 [2024-05-15 03:17:06.150258] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:35.196 [2024-05-15 03:17:06.150268] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25b55a0 name raid_bdev1, state offline
00:22:35.196 03:17:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 4174802
00:22:35.196 [2024-05-15 03:17:06.192283] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@797 -- # return 0
00:22:35.454
00:22:35.454 real 0m39.355s
00:22:35.454 user 0m59.521s
00:22:35.454 sys 0m5.467s
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:35.454 ************************************
00:22:35.454 END TEST raid_rebuild_test_sb
00:22:35.454 ************************************
00:22:35.454 03:17:06 bdev_raid -- bdev/bdev_raid.sh@825 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true
00:22:35.454 03:17:06 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:22:35.454 03:17:06 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:22:35.454 03:17:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:22:35.454 ************************************
00:22:35.454 START TEST raid_rebuild_test_io
00:22:35.454 ************************************
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 false true true
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=4
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local superblock=false
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local background_io=true
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local verify=true
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i = 1 ))
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs ))
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # echo BaseBdev1
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i++ ))
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs ))
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # echo BaseBdev2
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i++ ))
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs ))
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # echo BaseBdev3
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i++ ))
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs ))
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # echo BaseBdev4
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i++ ))
00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs ))
00:22:35.454 03:17:06
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # local strip_size 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@582 -- # local create_arg 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local data_offset 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # raid_pid=4181616 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # waitforlisten 4181616 /var/tmp/spdk-raid.sock 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@827 -- # '[' -z 4181616 ']' 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:35.454 03:17:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:35.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:35.455 03:17:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:35.455 03:17:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:35.455 [2024-05-15 03:17:06.552025] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:22:35.455 [2024-05-15 03:17:06.552078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4181616 ] 00:22:35.455 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:35.455 Zero copy mechanism will not be used. 
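For readers following the trace: the harness has just forked bdevperf as a long-lived JSON-RPC target on its own socket and is waiting for that socket to come up before configuring any bdevs. A minimal standalone sketch of the same launch-and-wait pattern in bash, using only the flags visible in the trace above (the rpc_get_methods polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its exact implementation):

    # Launch bdevperf against a private RPC socket; -z parks the workload until a
    # 'perform_tests' RPC arrives later. All flags are copied from the trace.
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Poll until the UNIX-domain socket answers JSON-RPC before sending any config.
    until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done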
00:22:35.713 [2024-05-15 03:17:06.647990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.713 [2024-05-15 03:17:06.742461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.713 [2024-05-15 03:17:06.802791] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:35.713 [2024-05-15 03:17:06.802847] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.648 03:17:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:36.648 03:17:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # return 0 00:22:36.648 03:17:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:22:36.648 03:17:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:36.648 BaseBdev1_malloc 00:22:36.648 03:17:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:36.906 [2024-05-15 03:17:07.992472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:36.906 [2024-05-15 03:17:07.992518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.906 [2024-05-15 03:17:07.992540] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22c2b00 00:22:36.906 [2024-05-15 03:17:07.992550] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.906 [2024-05-15 03:17:07.994269] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.906 [2024-05-15 03:17:07.994295] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:36.906 BaseBdev1 00:22:36.906 03:17:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:22:36.906 03:17:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:37.199 BaseBdev2_malloc 00:22:37.199 03:17:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:37.458 [2024-05-15 03:17:08.506418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:37.458 [2024-05-15 03:17:08.506459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.458 [2024-05-15 03:17:08.506475] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2468860 00:22:37.458 [2024-05-15 03:17:08.506484] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.458 [2024-05-15 03:17:08.508033] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.458 [2024-05-15 03:17:08.508059] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:37.458 BaseBdev2 00:22:37.458 03:17:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:22:37.458 03:17:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:37.716 BaseBdev3_malloc 00:22:37.716 03:17:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:37.974 [2024-05-15 03:17:09.016271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:37.974 [2024-05-15 03:17:09.016314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.974 [2024-05-15 03:17:09.016331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x246a080 00:22:37.974 [2024-05-15 03:17:09.016341] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.974 [2024-05-15 03:17:09.017899] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.974 [2024-05-15 03:17:09.017925] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:37.974 BaseBdev3 00:22:37.974 03:17:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:22:37.974 03:17:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:38.233 BaseBdev4_malloc 00:22:38.233 03:17:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:38.492 [2024-05-15 03:17:09.522095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:38.492 [2024-05-15 03:17:09.522135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.492 [2024-05-15 03:17:09.522152] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2468f20 00:22:38.492 [2024-05-15 03:17:09.522161] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.492 [2024-05-15 03:17:09.523693] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.492 [2024-05-15 03:17:09.523718] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:38.492 BaseBdev4 00:22:38.492 03:17:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:38.751 spare_malloc 00:22:38.751 03:17:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:39.009 spare_delay 00:22:39.009 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@614 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:39.268 [2024-05-15 03:17:10.276735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:39.268 [2024-05-15 03:17:10.276776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.268 [2024-05-15 03:17:10.276799] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22bbe10 00:22:39.268 [2024-05-15 03:17:10.276808] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.268 [2024-05-15 03:17:10.278423] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.268 [2024-05-15 03:17:10.278449] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:39.268 spare 00:22:39.268 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:39.526 [2024-05-15 03:17:10.541457] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:39.526 [2024-05-15 03:17:10.542803] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:39.526 [2024-05-15 03:17:10.542867] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:39.526 [2024-05-15 03:17:10.542915] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:39.526 [2024-05-15 03:17:10.542994] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x22bc6b0 00:22:39.526 [2024-05-15 03:17:10.543002] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:39.526 [2024-05-15 03:17:10.543227] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22c0be0 00:22:39.526 [2024-05-15 03:17:10.543382] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22bc6b0 00:22:39.526 [2024-05-15 03:17:10.543390] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22bc6b0 00:22:39.526 [2024-05-15 03:17:10.543506] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.526 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:39.526 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:39.526 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:39.526 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:39.526 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:39.526 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:39.526 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:39.526 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:39.526 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:39.526 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:39.526 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.526 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.785 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:39.785 "name": "raid_bdev1", 00:22:39.785 "uuid": 
"49580616-ee3d-42d6-8347-fb44269a1a8d", 00:22:39.785 "strip_size_kb": 0, 00:22:39.785 "state": "online", 00:22:39.785 "raid_level": "raid1", 00:22:39.785 "superblock": false, 00:22:39.785 "num_base_bdevs": 4, 00:22:39.785 "num_base_bdevs_discovered": 4, 00:22:39.785 "num_base_bdevs_operational": 4, 00:22:39.785 "base_bdevs_list": [ 00:22:39.785 { 00:22:39.785 "name": "BaseBdev1", 00:22:39.785 "uuid": "9e867113-2874-5f8f-8515-bb9c843bd37d", 00:22:39.785 "is_configured": true, 00:22:39.785 "data_offset": 0, 00:22:39.785 "data_size": 65536 00:22:39.785 }, 00:22:39.785 { 00:22:39.785 "name": "BaseBdev2", 00:22:39.785 "uuid": "e891e71f-8a6e-510d-8680-1f1323d41055", 00:22:39.785 "is_configured": true, 00:22:39.785 "data_offset": 0, 00:22:39.785 "data_size": 65536 00:22:39.785 }, 00:22:39.785 { 00:22:39.785 "name": "BaseBdev3", 00:22:39.785 "uuid": "3be9874c-b183-5fe0-acd4-64f984072926", 00:22:39.785 "is_configured": true, 00:22:39.785 "data_offset": 0, 00:22:39.785 "data_size": 65536 00:22:39.785 }, 00:22:39.785 { 00:22:39.785 "name": "BaseBdev4", 00:22:39.785 "uuid": "6d154012-4c41-5083-99ad-2e7a296e0d0d", 00:22:39.785 "is_configured": true, 00:22:39.785 "data_offset": 0, 00:22:39.785 "data_size": 65536 00:22:39.785 } 00:22:39.785 ] 00:22:39.785 }' 00:22:39.785 03:17:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:39.785 03:17:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:40.353 03:17:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:40.353 03:17:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:22:40.611 [2024-05-15 03:17:11.648689] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:40.611 03:17:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=65536 00:22:40.611 03:17:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.611 03:17:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:40.870 03:17:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # data_offset=0 00:22:40.870 03:17:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@626 -- # '[' true = true ']' 00:22:40.870 03:17:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:40.870 03:17:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:41.129 [2024-05-15 03:17:12.043483] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22bd780 00:22:41.129 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:41.129 Zero copy mechanism will not be used. 00:22:41.129 Running I/O for 60 seconds... 
00:22:41.129 [2024-05-15 03:17:12.171759] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:41.129 [2024-05-15 03:17:12.171936] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x22bd780 00:22:41.129 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:41.129 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:41.129 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:41.129 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:41.129 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:41.129 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:41.129 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:41.129 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:41.130 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:41.130 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:41.130 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.130 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.388 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:41.388 "name": "raid_bdev1", 00:22:41.388 "uuid": "49580616-ee3d-42d6-8347-fb44269a1a8d", 00:22:41.388 "strip_size_kb": 0, 00:22:41.388 "state": "online", 00:22:41.388 "raid_level": "raid1", 00:22:41.388 "superblock": false, 00:22:41.388 "num_base_bdevs": 4, 00:22:41.388 "num_base_bdevs_discovered": 3, 00:22:41.388 "num_base_bdevs_operational": 3, 00:22:41.388 "base_bdevs_list": [ 00:22:41.388 { 00:22:41.388 "name": null, 00:22:41.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.388 "is_configured": false, 00:22:41.388 "data_offset": 0, 00:22:41.388 "data_size": 65536 00:22:41.388 }, 00:22:41.388 { 00:22:41.388 "name": "BaseBdev2", 00:22:41.388 "uuid": "e891e71f-8a6e-510d-8680-1f1323d41055", 00:22:41.388 "is_configured": true, 00:22:41.388 "data_offset": 0, 00:22:41.388 "data_size": 65536 00:22:41.388 }, 00:22:41.388 { 00:22:41.388 "name": "BaseBdev3", 00:22:41.388 "uuid": "3be9874c-b183-5fe0-acd4-64f984072926", 00:22:41.388 "is_configured": true, 00:22:41.388 "data_offset": 0, 00:22:41.388 "data_size": 65536 00:22:41.388 }, 00:22:41.388 { 00:22:41.388 "name": "BaseBdev4", 00:22:41.388 "uuid": "6d154012-4c41-5083-99ad-2e7a296e0d0d", 00:22:41.388 "is_configured": true, 00:22:41.388 "data_offset": 0, 00:22:41.388 "data_size": 65536 00:22:41.388 } 00:22:41.388 ] 00:22:41.388 }' 00:22:41.388 03:17:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:41.388 03:17:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:41.955 03:17:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:42.213 [2024-05-15 03:17:13.338042] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:42.473 03:17:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # sleep 1 00:22:42.473 [2024-05-15 03:17:13.396508] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fcb1e0 00:22:42.473 [2024-05-15 03:17:13.398732] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:42.473 [2024-05-15 03:17:13.521207] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:42.473 [2024-05-15 03:17:13.521522] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:42.733 [2024-05-15 03:17:13.747018] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:42.733 [2024-05-15 03:17:13.747618] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:43.299 [2024-05-15 03:17:14.199210] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:43.299 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.299 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:43.299 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:43.299 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:43.299 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:43.299 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.299 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.299 [2024-05-15 03:17:14.410960] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:43.299 [2024-05-15 03:17:14.411117] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:43.558 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:43.558 "name": "raid_bdev1", 00:22:43.558 "uuid": "49580616-ee3d-42d6-8347-fb44269a1a8d", 00:22:43.558 "strip_size_kb": 0, 00:22:43.558 "state": "online", 00:22:43.558 "raid_level": "raid1", 00:22:43.558 "superblock": false, 00:22:43.558 "num_base_bdevs": 4, 00:22:43.558 "num_base_bdevs_discovered": 4, 00:22:43.558 "num_base_bdevs_operational": 4, 00:22:43.558 "process": { 00:22:43.558 "type": "rebuild", 00:22:43.558 "target": "spare", 00:22:43.558 "progress": { 00:22:43.558 "blocks": 10240, 00:22:43.558 "percent": 15 00:22:43.558 } 00:22:43.558 }, 00:22:43.558 "base_bdevs_list": [ 00:22:43.558 { 00:22:43.558 "name": "spare", 00:22:43.558 "uuid": "320c9b12-ba3a-5711-a39a-99a3f297d0bc", 00:22:43.558 "is_configured": true, 00:22:43.558 "data_offset": 0, 00:22:43.558 "data_size": 65536 00:22:43.558 }, 00:22:43.558 { 00:22:43.558 "name": "BaseBdev2", 00:22:43.558 "uuid": "e891e71f-8a6e-510d-8680-1f1323d41055", 00:22:43.558 "is_configured": true, 00:22:43.558 "data_offset": 0, 00:22:43.558 
"data_size": 65536 00:22:43.558 }, 00:22:43.558 { 00:22:43.558 "name": "BaseBdev3", 00:22:43.558 "uuid": "3be9874c-b183-5fe0-acd4-64f984072926", 00:22:43.558 "is_configured": true, 00:22:43.558 "data_offset": 0, 00:22:43.558 "data_size": 65536 00:22:43.558 }, 00:22:43.558 { 00:22:43.558 "name": "BaseBdev4", 00:22:43.558 "uuid": "6d154012-4c41-5083-99ad-2e7a296e0d0d", 00:22:43.558 "is_configured": true, 00:22:43.558 "data_offset": 0, 00:22:43.558 "data_size": 65536 00:22:43.558 } 00:22:43.558 ] 00:22:43.558 }' 00:22:43.558 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:43.558 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.558 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:43.816 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.816 03:17:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:43.816 [2024-05-15 03:17:14.771216] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:43.816 [2024-05-15 03:17:14.903647] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:43.816 [2024-05-15 03:17:14.904242] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:43.816 [2024-05-15 03:17:14.967185] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:44.074 [2024-05-15 03:17:15.160414] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:44.074 [2024-05-15 03:17:15.182550] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.074 [2024-05-15 03:17:15.208149] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x22bd780 00:22:44.333 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:44.333 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:44.333 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:44.333 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:44.333 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:44.333 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:44.333 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:44.333 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:44.333 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:44.333 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:44.333 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.333 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.592 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:44.592 "name": "raid_bdev1", 00:22:44.592 "uuid": "49580616-ee3d-42d6-8347-fb44269a1a8d", 00:22:44.592 "strip_size_kb": 0, 00:22:44.592 "state": "online", 00:22:44.592 "raid_level": "raid1", 00:22:44.592 "superblock": false, 00:22:44.592 "num_base_bdevs": 4, 00:22:44.592 "num_base_bdevs_discovered": 3, 00:22:44.592 "num_base_bdevs_operational": 3, 00:22:44.592 "base_bdevs_list": [ 00:22:44.592 { 00:22:44.592 "name": null, 00:22:44.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.592 "is_configured": false, 00:22:44.592 "data_offset": 0, 00:22:44.592 "data_size": 65536 00:22:44.592 }, 00:22:44.592 { 00:22:44.592 "name": "BaseBdev2", 00:22:44.592 "uuid": "e891e71f-8a6e-510d-8680-1f1323d41055", 00:22:44.592 "is_configured": true, 00:22:44.592 "data_offset": 0, 00:22:44.592 "data_size": 65536 00:22:44.592 }, 00:22:44.592 { 00:22:44.592 "name": "BaseBdev3", 00:22:44.592 "uuid": "3be9874c-b183-5fe0-acd4-64f984072926", 00:22:44.592 "is_configured": true, 00:22:44.592 "data_offset": 0, 00:22:44.592 "data_size": 65536 00:22:44.592 }, 00:22:44.592 { 00:22:44.592 "name": "BaseBdev4", 00:22:44.592 "uuid": "6d154012-4c41-5083-99ad-2e7a296e0d0d", 00:22:44.592 "is_configured": true, 00:22:44.592 "data_offset": 0, 00:22:44.592 "data_size": 65536 00:22:44.592 } 00:22:44.592 ] 00:22:44.592 }' 00:22:44.592 03:17:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:44.592 03:17:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:45.158 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:45.158 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:45.158 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:45.158 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:45.158 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:45.158 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.158 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.430 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:45.430 "name": "raid_bdev1", 00:22:45.430 "uuid": "49580616-ee3d-42d6-8347-fb44269a1a8d", 00:22:45.430 "strip_size_kb": 0, 00:22:45.430 "state": "online", 00:22:45.430 "raid_level": "raid1", 00:22:45.430 "superblock": false, 00:22:45.430 "num_base_bdevs": 4, 00:22:45.430 "num_base_bdevs_discovered": 3, 00:22:45.430 "num_base_bdevs_operational": 3, 00:22:45.430 "base_bdevs_list": [ 00:22:45.430 { 00:22:45.430 "name": null, 00:22:45.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.430 "is_configured": false, 00:22:45.430 "data_offset": 0, 00:22:45.430 "data_size": 65536 00:22:45.430 }, 00:22:45.430 { 00:22:45.430 "name": "BaseBdev2", 00:22:45.430 "uuid": "e891e71f-8a6e-510d-8680-1f1323d41055", 00:22:45.430 "is_configured": true, 00:22:45.430 "data_offset": 0, 00:22:45.430 "data_size": 65536 00:22:45.430 }, 00:22:45.430 { 00:22:45.430 "name": "BaseBdev3", 00:22:45.430 "uuid": 
"3be9874c-b183-5fe0-acd4-64f984072926", 00:22:45.430 "is_configured": true, 00:22:45.430 "data_offset": 0, 00:22:45.430 "data_size": 65536 00:22:45.430 }, 00:22:45.430 { 00:22:45.430 "name": "BaseBdev4", 00:22:45.430 "uuid": "6d154012-4c41-5083-99ad-2e7a296e0d0d", 00:22:45.430 "is_configured": true, 00:22:45.430 "data_offset": 0, 00:22:45.430 "data_size": 65536 00:22:45.430 } 00:22:45.430 ] 00:22:45.430 }' 00:22:45.430 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:45.430 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:45.430 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:45.430 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:45.430 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@667 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:45.736 [2024-05-15 03:17:16.797832] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:45.736 03:17:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # sleep 1 00:22:45.736 [2024-05-15 03:17:16.864880] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2474d30 00:22:45.736 [2024-05-15 03:17:16.866448] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:45.994 [2024-05-15 03:17:17.002366] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:45.994 [2024-05-15 03:17:17.011572] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:46.252 [2024-05-15 03:17:17.234111] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:46.252 [2024-05-15 03:17:17.234685] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:46.819 [2024-05-15 03:17:17.716338] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:46.819 03:17:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.819 03:17:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:46.819 03:17:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:46.819 03:17:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:46.819 03:17:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:46.819 03:17:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.819 03:17:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.819 [2024-05-15 03:17:17.942566] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:46.819 [2024-05-15 03:17:17.945021] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 
00:22:47.077 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:47.077 "name": "raid_bdev1", 00:22:47.077 "uuid": "49580616-ee3d-42d6-8347-fb44269a1a8d", 00:22:47.077 "strip_size_kb": 0, 00:22:47.077 "state": "online", 00:22:47.077 "raid_level": "raid1", 00:22:47.077 "superblock": false, 00:22:47.077 "num_base_bdevs": 4, 00:22:47.077 "num_base_bdevs_discovered": 4, 00:22:47.077 "num_base_bdevs_operational": 4, 00:22:47.077 "process": { 00:22:47.077 "type": "rebuild", 00:22:47.077 "target": "spare", 00:22:47.077 "progress": { 00:22:47.077 "blocks": 14336, 00:22:47.077 "percent": 21 00:22:47.077 } 00:22:47.077 }, 00:22:47.077 "base_bdevs_list": [ 00:22:47.077 { 00:22:47.077 "name": "spare", 00:22:47.077 "uuid": "320c9b12-ba3a-5711-a39a-99a3f297d0bc", 00:22:47.077 "is_configured": true, 00:22:47.077 "data_offset": 0, 00:22:47.077 "data_size": 65536 00:22:47.077 }, 00:22:47.077 { 00:22:47.077 "name": "BaseBdev2", 00:22:47.077 "uuid": "e891e71f-8a6e-510d-8680-1f1323d41055", 00:22:47.077 "is_configured": true, 00:22:47.077 "data_offset": 0, 00:22:47.077 "data_size": 65536 00:22:47.077 }, 00:22:47.077 { 00:22:47.077 "name": "BaseBdev3", 00:22:47.077 "uuid": "3be9874c-b183-5fe0-acd4-64f984072926", 00:22:47.077 "is_configured": true, 00:22:47.077 "data_offset": 0, 00:22:47.077 "data_size": 65536 00:22:47.077 }, 00:22:47.077 { 00:22:47.077 "name": "BaseBdev4", 00:22:47.077 "uuid": "6d154012-4c41-5083-99ad-2e7a296e0d0d", 00:22:47.077 "is_configured": true, 00:22:47.077 "data_offset": 0, 00:22:47.077 "data_size": 65536 00:22:47.077 } 00:22:47.077 ] 00:22:47.077 }' 00:22:47.077 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:47.077 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.077 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:47.077 [2024-05-15 03:17:18.177266] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:47.077 [2024-05-15 03:17:18.177912] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:47.077 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.077 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # '[' false = true ']' 00:22:47.078 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=4 00:22:47.078 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:22:47.078 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # '[' 4 -gt 2 ']' 00:22:47.078 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@700 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:47.336 [2024-05-15 03:17:18.447608] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:47.594 [2024-05-15 03:17:18.569638] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:47.595 [2024-05-15 03:17:18.671916] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x22bd780 00:22:47.595 [2024-05-15 03:17:18.671940] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 
1 raid_ch: 0x2474d30 00:22:47.595 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@703 -- # base_bdevs[1]= 00:22:47.595 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@704 -- # (( num_base_bdevs_operational-- )) 00:22:47.595 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:47.595 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:47.595 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:47.595 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:47.595 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:47.595 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.595 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.854 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:47.854 "name": "raid_bdev1", 00:22:47.854 "uuid": "49580616-ee3d-42d6-8347-fb44269a1a8d", 00:22:47.854 "strip_size_kb": 0, 00:22:47.854 "state": "online", 00:22:47.854 "raid_level": "raid1", 00:22:47.854 "superblock": false, 00:22:47.854 "num_base_bdevs": 4, 00:22:47.854 "num_base_bdevs_discovered": 3, 00:22:47.854 "num_base_bdevs_operational": 3, 00:22:47.854 "process": { 00:22:47.854 "type": "rebuild", 00:22:47.854 "target": "spare", 00:22:47.854 "progress": { 00:22:47.854 "blocks": 22528, 00:22:47.854 "percent": 34 00:22:47.854 } 00:22:47.854 }, 00:22:47.854 "base_bdevs_list": [ 00:22:47.854 { 00:22:47.854 "name": "spare", 00:22:47.854 "uuid": "320c9b12-ba3a-5711-a39a-99a3f297d0bc", 00:22:47.854 "is_configured": true, 00:22:47.854 "data_offset": 0, 00:22:47.854 "data_size": 65536 00:22:47.854 }, 00:22:47.854 { 00:22:47.854 "name": null, 00:22:47.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.854 "is_configured": false, 00:22:47.854 "data_offset": 0, 00:22:47.854 "data_size": 65536 00:22:47.854 }, 00:22:47.854 { 00:22:47.854 "name": "BaseBdev3", 00:22:47.854 "uuid": "3be9874c-b183-5fe0-acd4-64f984072926", 00:22:47.854 "is_configured": true, 00:22:47.854 "data_offset": 0, 00:22:47.854 "data_size": 65536 00:22:47.854 }, 00:22:47.854 { 00:22:47.854 "name": "BaseBdev4", 00:22:47.854 "uuid": "6d154012-4c41-5083-99ad-2e7a296e0d0d", 00:22:47.854 "is_configured": true, 00:22:47.854 "data_offset": 0, 00:22:47.854 "data_size": 65536 00:22:47.854 } 00:22:47.854 ] 00:22:47.854 }' 00:22:47.854 03:17:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:47.854 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.854 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:48.113 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.113 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # local timeout=816 00:22:48.113 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:22:48.113 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.113 
03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:48.113 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:48.113 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:48.113 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:48.113 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.113 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.113 [2024-05-15 03:17:19.178751] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:48.113 [2024-05-15 03:17:19.179179] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:48.372 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:48.372 "name": "raid_bdev1", 00:22:48.372 "uuid": "49580616-ee3d-42d6-8347-fb44269a1a8d", 00:22:48.372 "strip_size_kb": 0, 00:22:48.372 "state": "online", 00:22:48.372 "raid_level": "raid1", 00:22:48.372 "superblock": false, 00:22:48.372 "num_base_bdevs": 4, 00:22:48.372 "num_base_bdevs_discovered": 3, 00:22:48.372 "num_base_bdevs_operational": 3, 00:22:48.372 "process": { 00:22:48.372 "type": "rebuild", 00:22:48.372 "target": "spare", 00:22:48.372 "progress": { 00:22:48.372 "blocks": 28672, 00:22:48.372 "percent": 43 00:22:48.372 } 00:22:48.372 }, 00:22:48.372 "base_bdevs_list": [ 00:22:48.372 { 00:22:48.372 "name": "spare", 00:22:48.372 "uuid": "320c9b12-ba3a-5711-a39a-99a3f297d0bc", 00:22:48.372 "is_configured": true, 00:22:48.372 "data_offset": 0, 00:22:48.372 "data_size": 65536 00:22:48.372 }, 00:22:48.372 { 00:22:48.372 "name": null, 00:22:48.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.372 "is_configured": false, 00:22:48.372 "data_offset": 0, 00:22:48.372 "data_size": 65536 00:22:48.372 }, 00:22:48.372 { 00:22:48.372 "name": "BaseBdev3", 00:22:48.372 "uuid": "3be9874c-b183-5fe0-acd4-64f984072926", 00:22:48.372 "is_configured": true, 00:22:48.372 "data_offset": 0, 00:22:48.372 "data_size": 65536 00:22:48.372 }, 00:22:48.372 { 00:22:48.372 "name": "BaseBdev4", 00:22:48.372 "uuid": "6d154012-4c41-5083-99ad-2e7a296e0d0d", 00:22:48.372 "is_configured": true, 00:22:48.372 "data_offset": 0, 00:22:48.372 "data_size": 65536 00:22:48.372 } 00:22:48.372 ] 00:22:48.372 }' 00:22:48.372 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:48.372 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.372 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:48.372 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.372 03:17:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # sleep 1 00:22:49.307 [2024-05-15 03:17:20.285855] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:49.307 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:22:49.307 03:17:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.307 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:49.307 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:49.307 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:49.307 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:49.307 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.307 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.566 [2024-05-15 03:17:20.541089] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:49.566 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:49.566 "name": "raid_bdev1", 00:22:49.566 "uuid": "49580616-ee3d-42d6-8347-fb44269a1a8d", 00:22:49.566 "strip_size_kb": 0, 00:22:49.566 "state": "online", 00:22:49.566 "raid_level": "raid1", 00:22:49.566 "superblock": false, 00:22:49.566 "num_base_bdevs": 4, 00:22:49.566 "num_base_bdevs_discovered": 3, 00:22:49.566 "num_base_bdevs_operational": 3, 00:22:49.566 "process": { 00:22:49.566 "type": "rebuild", 00:22:49.566 "target": "spare", 00:22:49.566 "progress": { 00:22:49.566 "blocks": 51200, 00:22:49.566 "percent": 78 00:22:49.566 } 00:22:49.566 }, 00:22:49.566 "base_bdevs_list": [ 00:22:49.566 { 00:22:49.566 "name": "spare", 00:22:49.566 "uuid": "320c9b12-ba3a-5711-a39a-99a3f297d0bc", 00:22:49.566 "is_configured": true, 00:22:49.566 "data_offset": 0, 00:22:49.566 "data_size": 65536 00:22:49.566 }, 00:22:49.566 { 00:22:49.566 "name": null, 00:22:49.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.566 "is_configured": false, 00:22:49.566 "data_offset": 0, 00:22:49.566 "data_size": 65536 00:22:49.566 }, 00:22:49.566 { 00:22:49.566 "name": "BaseBdev3", 00:22:49.566 "uuid": "3be9874c-b183-5fe0-acd4-64f984072926", 00:22:49.566 "is_configured": true, 00:22:49.566 "data_offset": 0, 00:22:49.566 "data_size": 65536 00:22:49.566 }, 00:22:49.566 { 00:22:49.566 "name": "BaseBdev4", 00:22:49.566 "uuid": "6d154012-4c41-5083-99ad-2e7a296e0d0d", 00:22:49.566 "is_configured": true, 00:22:49.566 "data_offset": 0, 00:22:49.566 "data_size": 65536 00:22:49.566 } 00:22:49.566 ] 00:22:49.566 }' 00:22:49.566 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:49.824 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.824 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:49.824 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.824 03:17:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # sleep 1 00:22:50.082 [2024-05-15 03:17:21.007231] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:22:50.082 [2024-05-15 03:17:21.231067] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:50.649 [2024-05-15 
03:17:21.596436] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:50.649 [2024-05-15 03:17:21.705455] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:50.649 [2024-05-15 03:17:21.707602] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.649 03:17:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:22:50.649 03:17:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:50.649 03:17:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:50.649 03:17:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:50.649 03:17:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:50.649 03:17:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:50.649 03:17:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.649 03:17:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.908 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:50.908 "name": "raid_bdev1", 00:22:50.908 "uuid": "49580616-ee3d-42d6-8347-fb44269a1a8d", 00:22:50.908 "strip_size_kb": 0, 00:22:50.908 "state": "online", 00:22:50.908 "raid_level": "raid1", 00:22:50.908 "superblock": false, 00:22:50.908 "num_base_bdevs": 4, 00:22:50.908 "num_base_bdevs_discovered": 3, 00:22:50.908 "num_base_bdevs_operational": 3, 00:22:50.908 "base_bdevs_list": [ 00:22:50.908 { 00:22:50.908 "name": "spare", 00:22:50.908 "uuid": "320c9b12-ba3a-5711-a39a-99a3f297d0bc", 00:22:50.908 "is_configured": true, 00:22:50.908 "data_offset": 0, 00:22:50.908 "data_size": 65536 00:22:50.908 }, 00:22:50.908 { 00:22:50.908 "name": null, 00:22:50.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.908 "is_configured": false, 00:22:50.908 "data_offset": 0, 00:22:50.908 "data_size": 65536 00:22:50.908 }, 00:22:50.908 { 00:22:50.908 "name": "BaseBdev3", 00:22:50.908 "uuid": "3be9874c-b183-5fe0-acd4-64f984072926", 00:22:50.908 "is_configured": true, 00:22:50.908 "data_offset": 0, 00:22:50.908 "data_size": 65536 00:22:50.908 }, 00:22:50.908 { 00:22:50.908 "name": "BaseBdev4", 00:22:50.908 "uuid": "6d154012-4c41-5083-99ad-2e7a296e0d0d", 00:22:50.908 "is_configured": true, 00:22:50.908 "data_offset": 0, 00:22:50.908 "data_size": 65536 00:22:50.908 } 00:22:50.908 ] 00:22:50.908 }' 00:22:50.908 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.166 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:51.166 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.166 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:51.166 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # break 00:22:51.166 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:51.166 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
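(The trace above is the suite's rebuild-progress poll: each pass of verify_raid_bdev_process fetches raid_bdev1 over the RPC socket, extracts .process.type and .process.target with jq, asserts they still read "rebuild"/"spare", and sleeps one second while SECONDS stays under the timeout of 816 captured at bdev_raid.sh@711. A minimal standalone sketch of that loop follows; the rpc wrapper and the name poll_rebuild are illustrative, not the suite's own.)

rpc() {
    /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-raid.sock "$@"
}

poll_rebuild() {
    local raid_bdev_name=$1 timeout=$2 info ptype target
    while (( SECONDS < timeout )); do
        info=$(rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # The suite asserts these read "rebuild"/"spare" on every pass; the
        # .process object vanishes from the RPC output once the rebuild is done.
        ptype=$(jq -r '.process.type // "none"' <<< "$info")
        target=$(jq -r '.process.target // "none"' <<< "$info")
        [[ $ptype == none && $target == none ]] && return 0
        sleep 1
    done
    return 1   # rebuild still running at the deadline
}

poll_rebuild raid_bdev1 816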
00:22:51.166 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:51.166 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:51.166 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:51.166 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.166 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.424 "name": "raid_bdev1", 00:22:51.424 "uuid": "49580616-ee3d-42d6-8347-fb44269a1a8d", 00:22:51.424 "strip_size_kb": 0, 00:22:51.424 "state": "online", 00:22:51.424 "raid_level": "raid1", 00:22:51.424 "superblock": false, 00:22:51.424 "num_base_bdevs": 4, 00:22:51.424 "num_base_bdevs_discovered": 3, 00:22:51.424 "num_base_bdevs_operational": 3, 00:22:51.424 "base_bdevs_list": [ 00:22:51.424 { 00:22:51.424 "name": "spare", 00:22:51.424 "uuid": "320c9b12-ba3a-5711-a39a-99a3f297d0bc", 00:22:51.424 "is_configured": true, 00:22:51.424 "data_offset": 0, 00:22:51.424 "data_size": 65536 00:22:51.424 }, 00:22:51.424 { 00:22:51.424 "name": null, 00:22:51.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.424 "is_configured": false, 00:22:51.424 "data_offset": 0, 00:22:51.424 "data_size": 65536 00:22:51.424 }, 00:22:51.424 { 00:22:51.424 "name": "BaseBdev3", 00:22:51.424 "uuid": "3be9874c-b183-5fe0-acd4-64f984072926", 00:22:51.424 "is_configured": true, 00:22:51.424 "data_offset": 0, 00:22:51.424 "data_size": 65536 00:22:51.424 }, 00:22:51.424 { 00:22:51.424 "name": "BaseBdev4", 00:22:51.424 "uuid": "6d154012-4c41-5083-99ad-2e7a296e0d0d", 00:22:51.424 "is_configured": true, 00:22:51.424 "data_offset": 0, 00:22:51.424 "data_size": 65536 00:22:51.424 } 00:22:51.424 ] 00:22:51.424 }' 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@125 -- # local tmp 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.424 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.682 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:51.682 "name": "raid_bdev1", 00:22:51.682 "uuid": "49580616-ee3d-42d6-8347-fb44269a1a8d", 00:22:51.682 "strip_size_kb": 0, 00:22:51.682 "state": "online", 00:22:51.682 "raid_level": "raid1", 00:22:51.682 "superblock": false, 00:22:51.682 "num_base_bdevs": 4, 00:22:51.682 "num_base_bdevs_discovered": 3, 00:22:51.682 "num_base_bdevs_operational": 3, 00:22:51.682 "base_bdevs_list": [ 00:22:51.682 { 00:22:51.682 "name": "spare", 00:22:51.682 "uuid": "320c9b12-ba3a-5711-a39a-99a3f297d0bc", 00:22:51.682 "is_configured": true, 00:22:51.682 "data_offset": 0, 00:22:51.682 "data_size": 65536 00:22:51.682 }, 00:22:51.682 { 00:22:51.682 "name": null, 00:22:51.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.682 "is_configured": false, 00:22:51.682 "data_offset": 0, 00:22:51.682 "data_size": 65536 00:22:51.682 }, 00:22:51.682 { 00:22:51.682 "name": "BaseBdev3", 00:22:51.682 "uuid": "3be9874c-b183-5fe0-acd4-64f984072926", 00:22:51.682 "is_configured": true, 00:22:51.682 "data_offset": 0, 00:22:51.682 "data_size": 65536 00:22:51.682 }, 00:22:51.682 { 00:22:51.682 "name": "BaseBdev4", 00:22:51.682 "uuid": "6d154012-4c41-5083-99ad-2e7a296e0d0d", 00:22:51.682 "is_configured": true, 00:22:51.682 "data_offset": 0, 00:22:51.682 "data_size": 65536 00:22:51.682 } 00:22:51.682 ] 00:22:51.682 }' 00:22:51.682 03:17:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:51.682 03:17:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:52.247 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:52.511 [2024-05-15 03:17:23.624228] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:52.511 [2024-05-15 03:17:23.624260] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:52.771 00:22:52.771 Latency(us) 00:22:52.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.771 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:52.771 raid_bdev1 : 11.63 93.09 279.26 0.00 0.00 14638.02 304.27 122333.87 00:22:52.771 =================================================================================================================== 00:22:52.771 Total : 93.09 279.26 0.00 0.00 14638.02 304.27 122333.87 00:22:52.771 [2024-05-15 03:17:23.712657] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.771 [2024-05-15 03:17:23.712686] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:52.771 [2024-05-15 03:17:23.712783] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:52.771 [2024-05-15 03:17:23.712792] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22bc6b0 name raid_bdev1, state offline 00:22:52.771 0 00:22:52.771 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.771 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # jq length 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]] 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' true = true ']' 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # '[' true = true ']' 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:53.030 03:17:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:53.289 /dev/nbd0 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:53.289 1+0 records in 00:22:53.289 1+0 records out 00:22:53.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229277 s, 17.9 MB/s 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:53.289 03:17:24 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # for bdev in "${base_bdevs[@]:1}" 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # '[' -z '' ']' 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # continue 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # for bdev in "${base_bdevs[@]:1}" 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # '[' -z BaseBdev3 ']' 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:53.289 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:53.548 /dev/nbd1 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:53.548 1+0 records in 00:22:53.548 1+0 records out 00:22:53.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242471 s, 16.9 MB/s 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@736 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:53.548 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # for bdev in "${base_bdevs[@]:1}" 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # '[' -z BaseBdev4 ']' 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:22:53.808 03:17:24 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:53.808 03:17:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:54.067 /dev/nbd1 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:54.067 1+0 records in 00:22:54.067 1+0 records out 00:22:54.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202047 s, 20.3 MB/s 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:22:54.067 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:22:54.068 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:54.068 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:54.068 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@736 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:54.068 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:54.068 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:54.068 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:54.068 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:54.068 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:54.068 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.068 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.327 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@748 -- # '[' false = true ']' 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@795 -- # killprocess 4181616 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@946 -- # '[' -z 4181616 ']' 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # kill -0 4181616 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # uname 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4181616 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 4181616' 00:22:54.585 killing process with pid 4181616 00:22:54.585 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@965 -- # kill 4181616 00:22:54.586 Received shutdown signal, test time was about 13.638741 seconds 00:22:54.586 00:22:54.586 Latency(us) 00:22:54.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.586 =================================================================================================================== 00:22:54.586 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.586 [2024-05-15 03:17:25.717763] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:54.586 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # wait 4181616 00:22:54.843 [2024-05-15 03:17:25.756496] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:54.843 03:17:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@797 -- # return 0 00:22:54.843 00:22:54.843 real 0m19.500s 00:22:54.843 user 0m31.046s 00:22:54.843 sys 0m2.733s 00:22:54.843 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:54.843 03:17:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:54.843 ************************************ 00:22:54.843 END TEST raid_rebuild_test_io 00:22:54.843 ************************************ 00:22:55.103 03:17:26 bdev_raid -- bdev/bdev_raid.sh@826 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:22:55.103 03:17:26 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:22:55.103 03:17:26 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:55.103 03:17:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:55.103 ************************************ 00:22:55.103 START TEST raid_rebuild_test_sb_io 00:22:55.103 ************************************ 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 true true true 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=4 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local superblock=true 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local background_io=true 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local verify=true 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i = 1 )) 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # echo BaseBdev1 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # echo BaseBdev2 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@581 -- # echo BaseBdev3 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # echo BaseBdev4 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # local strip_size 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@582 -- # local create_arg 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local data_offset 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # '[' true = true ']' 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # create_arg+=' -s' 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # raid_pid=4185034 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # waitforlisten 4185034 /var/tmp/spdk-raid.sock 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@827 -- # '[' -z 4185034 ']' 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:55.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:55.103 03:17:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:55.103 [2024-05-15 03:17:26.127985] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
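(Compared with the plain _io run that just ended, the sb_io invocation traced above flips superblock on, which appends -s to create_arg, while background_io=true is what justifies the 3 MiB random-I/O bdevperf job whose startup banner begins here. A hedged paraphrase of the prologue traced at bdev_raid.sh@574-603, keeping the exact bdevperf flags from the log; the loop body and variable layout are condensed, not verbatim.)

# raid_rebuild_test raid1 4 true true true, per the trace above
raid_level=raid1
num_base_bdevs=4
superblock=true
background_io=true
verify=true

base_bdevs=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
    base_bdevs+=("BaseBdev$i")        # BaseBdev1 .. BaseBdev4
done

strip_size=0                          # raid1 carries no strip size
create_arg=''
[ "$superblock" = true ] && create_arg+=' -s'

# 60 s of 50/50 random read/write at queue depth 2 with 3 MiB I/Os; a
# 3145728-byte I/O exceeds the 65536-byte zero-copy threshold, hence the
# notice that follows in the log.
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
    -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!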
00:22:55.103 [2024-05-15 03:17:26.128041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4185034 ] 00:22:55.103 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:55.103 Zero copy mechanism will not be used. 00:22:55.103 [2024-05-15 03:17:26.224057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.362 [2024-05-15 03:17:26.318394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.362 [2024-05-15 03:17:26.379441] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:55.362 [2024-05-15 03:17:26.379484] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:55.928 03:17:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:55.928 03:17:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # return 0 00:22:55.928 03:17:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:22:55.928 03:17:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:56.187 BaseBdev1_malloc 00:22:56.187 03:17:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:56.446 [2024-05-15 03:17:27.568892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:56.446 [2024-05-15 03:17:27.568936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.446 [2024-05-15 03:17:27.568957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19a5b00 00:22:56.446 [2024-05-15 03:17:27.568966] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.446 [2024-05-15 03:17:27.570681] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.446 [2024-05-15 03:17:27.570709] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:56.446 BaseBdev1 00:22:56.446 03:17:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:22:56.446 03:17:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:56.705 BaseBdev2_malloc 00:22:56.706 03:17:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:56.965 [2024-05-15 03:17:28.062847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:56.965 [2024-05-15 03:17:28.062896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.965 [2024-05-15 03:17:28.062911] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b4b860 00:22:56.965 [2024-05-15 03:17:28.062921] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.965 [2024-05-15 03:17:28.064442] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.965 [2024-05-15 03:17:28.064468] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:56.965 BaseBdev2 00:22:56.965 03:17:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:22:56.965 03:17:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:57.225 BaseBdev3_malloc 00:22:57.225 03:17:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:57.483 [2024-05-15 03:17:28.576842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:57.483 [2024-05-15 03:17:28.576892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.483 [2024-05-15 03:17:28.576909] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b4d080 00:22:57.483 [2024-05-15 03:17:28.576918] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.483 [2024-05-15 03:17:28.578475] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.483 [2024-05-15 03:17:28.578501] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:57.483 BaseBdev3 00:22:57.483 03:17:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:22:57.483 03:17:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:57.742 BaseBdev4_malloc 00:22:57.742 03:17:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:58.000 [2024-05-15 03:17:29.082652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:58.000 [2024-05-15 03:17:29.082694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.000 [2024-05-15 03:17:29.082712] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b4bf20 00:22:58.000 [2024-05-15 03:17:29.082722] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.000 [2024-05-15 03:17:29.084280] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.000 [2024-05-15 03:17:29.084307] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:58.000 BaseBdev4 00:22:58.000 03:17:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:58.259 spare_malloc 00:22:58.259 03:17:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:58.518 spare_delay 00:22:58.518 03:17:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@614 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:58.777 [2024-05-15 03:17:29.849125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:58.777 [2024-05-15 03:17:29.849167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.777 [2024-05-15 03:17:29.849186] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x199ee10 00:22:58.777 [2024-05-15 03:17:29.849195] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.777 [2024-05-15 03:17:29.850791] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.777 [2024-05-15 03:17:29.850816] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:58.777 spare 00:22:58.777 03:17:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:59.035 [2024-05-15 03:17:30.109860] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:59.035 [2024-05-15 03:17:30.111296] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:59.035 [2024-05-15 03:17:30.111356] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:59.035 [2024-05-15 03:17:30.111405] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:59.035 [2024-05-15 03:17:30.111602] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x199f6b0 00:22:59.035 [2024-05-15 03:17:30.111613] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:59.035 [2024-05-15 03:17:30.111827] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19a0bb0 00:22:59.035 [2024-05-15 03:17:30.111999] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x199f6b0 00:22:59.035 [2024-05-15 03:17:30.112008] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x199f6b0 00:22:59.035 [2024-05-15 03:17:30.112112] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.035 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:59.035 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:59.035 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:59.035 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:59.035 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:59.035 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:59.035 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:59.035 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:59.036 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:59.036 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:59.036 03:17:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.036 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.296 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:59.296 "name": "raid_bdev1", 00:22:59.296 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:22:59.296 "strip_size_kb": 0, 00:22:59.296 "state": "online", 00:22:59.296 "raid_level": "raid1", 00:22:59.296 "superblock": true, 00:22:59.296 "num_base_bdevs": 4, 00:22:59.296 "num_base_bdevs_discovered": 4, 00:22:59.296 "num_base_bdevs_operational": 4, 00:22:59.296 "base_bdevs_list": [ 00:22:59.296 { 00:22:59.296 "name": "BaseBdev1", 00:22:59.296 "uuid": "b479ee11-9b0c-57a7-8dba-a98be2de5706", 00:22:59.296 "is_configured": true, 00:22:59.296 "data_offset": 2048, 00:22:59.296 "data_size": 63488 00:22:59.296 }, 00:22:59.296 { 00:22:59.296 "name": "BaseBdev2", 00:22:59.296 "uuid": "d646bcc9-ed0f-556f-8a1b-a8055fbc5d19", 00:22:59.296 "is_configured": true, 00:22:59.296 "data_offset": 2048, 00:22:59.296 "data_size": 63488 00:22:59.296 }, 00:22:59.296 { 00:22:59.296 "name": "BaseBdev3", 00:22:59.296 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:22:59.296 "is_configured": true, 00:22:59.296 "data_offset": 2048, 00:22:59.296 "data_size": 63488 00:22:59.296 }, 00:22:59.296 { 00:22:59.296 "name": "BaseBdev4", 00:22:59.296 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:22:59.296 "is_configured": true, 00:22:59.296 "data_offset": 2048, 00:22:59.296 "data_size": 63488 00:22:59.296 } 00:22:59.296 ] 00:22:59.296 }' 00:22:59.296 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:59.296 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:59.893 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:59.893 03:17:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:23:00.152 [2024-05-15 03:17:31.225373] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:00.152 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=63488 00:23:00.152 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.152 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:00.411 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # data_offset=2048 00:23:00.411 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@626 -- # '[' true = true ']' 00:23:00.411 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:00.411 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:00.670 [2024-05-15 03:17:31.608113] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x1b4a870 00:23:00.670 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:00.670 Zero copy mechanism will not be used. 00:23:00.670 Running I/O for 60 seconds... 00:23:00.670 [2024-05-15 03:17:31.734935] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:00.670 [2024-05-15 03:17:31.735117] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1b4a870 00:23:00.670 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:00.670 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:00.670 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:00.670 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:00.670 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:00.670 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:00.670 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:00.670 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:00.670 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:00.670 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:00.670 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.670 03:17:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.929 03:17:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.929 "name": "raid_bdev1", 00:23:00.929 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:00.929 "strip_size_kb": 0, 00:23:00.929 "state": "online", 00:23:00.929 "raid_level": "raid1", 00:23:00.929 "superblock": true, 00:23:00.929 "num_base_bdevs": 4, 00:23:00.929 "num_base_bdevs_discovered": 3, 00:23:00.929 "num_base_bdevs_operational": 3, 00:23:00.929 "base_bdevs_list": [ 00:23:00.929 { 00:23:00.929 "name": null, 00:23:00.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.929 "is_configured": false, 00:23:00.929 "data_offset": 2048, 00:23:00.929 "data_size": 63488 00:23:00.929 }, 00:23:00.929 { 00:23:00.929 "name": "BaseBdev2", 00:23:00.929 "uuid": "d646bcc9-ed0f-556f-8a1b-a8055fbc5d19", 00:23:00.929 "is_configured": true, 00:23:00.929 "data_offset": 2048, 00:23:00.929 "data_size": 63488 00:23:00.929 }, 00:23:00.929 { 00:23:00.929 "name": "BaseBdev3", 00:23:00.929 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:00.929 "is_configured": true, 00:23:00.929 "data_offset": 2048, 00:23:00.929 "data_size": 63488 00:23:00.929 }, 00:23:00.929 { 00:23:00.929 "name": "BaseBdev4", 00:23:00.929 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:00.929 "is_configured": true, 00:23:00.929 "data_offset": 2048, 00:23:00.929 "data_size": 63488 00:23:00.929 } 00:23:00.929 ] 00:23:00.929 }' 00:23:00.929 03:17:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.929 03:17:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:01.865 03:17:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:01.865 [2024-05-15 03:17:32.954606] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:01.865 [2024-05-15 03:17:33.013010] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a3cd10 00:23:01.865 [2024-05-15 03:17:33.015248] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:01.865 03:17:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # sleep 1 00:23:02.124 [2024-05-15 03:17:33.144199] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:02.124 [2024-05-15 03:17:33.145363] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:02.383 [2024-05-15 03:17:33.370166] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:02.383 [2024-05-15 03:17:33.370345] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:02.641 [2024-05-15 03:17:33.707136] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:02.899 [2024-05-15 03:17:33.838953] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:02.899 [2024-05-15 03:17:33.839521] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:02.899 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:02.899 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:02.899 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:02.899 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:02.899 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:02.899 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.899 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.158 [2024-05-15 03:17:34.185683] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:03.158 [2024-05-15 03:17:34.186005] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:03.158 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:03.158 "name": "raid_bdev1", 00:23:03.158 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:03.158 "strip_size_kb": 0, 00:23:03.158 "state": "online", 00:23:03.158 "raid_level": "raid1", 00:23:03.158 "superblock": true, 00:23:03.158 "num_base_bdevs": 4, 00:23:03.158 "num_base_bdevs_discovered": 4, 00:23:03.158 "num_base_bdevs_operational": 4, 00:23:03.158 "process": { 00:23:03.158 "type": "rebuild", 
00:23:03.158 "target": "spare", 00:23:03.158 "progress": { 00:23:03.158 "blocks": 14336, 00:23:03.158 "percent": 22 00:23:03.158 } 00:23:03.158 }, 00:23:03.158 "base_bdevs_list": [ 00:23:03.158 { 00:23:03.158 "name": "spare", 00:23:03.158 "uuid": "bf1d6882-df47-5bcd-8aae-30dd84fde95f", 00:23:03.158 "is_configured": true, 00:23:03.158 "data_offset": 2048, 00:23:03.158 "data_size": 63488 00:23:03.158 }, 00:23:03.158 { 00:23:03.158 "name": "BaseBdev2", 00:23:03.158 "uuid": "d646bcc9-ed0f-556f-8a1b-a8055fbc5d19", 00:23:03.158 "is_configured": true, 00:23:03.158 "data_offset": 2048, 00:23:03.158 "data_size": 63488 00:23:03.158 }, 00:23:03.158 { 00:23:03.158 "name": "BaseBdev3", 00:23:03.158 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:03.158 "is_configured": true, 00:23:03.158 "data_offset": 2048, 00:23:03.158 "data_size": 63488 00:23:03.158 }, 00:23:03.158 { 00:23:03.158 "name": "BaseBdev4", 00:23:03.158 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:03.158 "is_configured": true, 00:23:03.158 "data_offset": 2048, 00:23:03.158 "data_size": 63488 00:23:03.158 } 00:23:03.158 ] 00:23:03.158 }' 00:23:03.158 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:03.417 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:03.417 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:03.417 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:03.417 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:03.417 [2024-05-15 03:17:34.390693] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:03.417 [2024-05-15 03:17:34.390969] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:03.676 [2024-05-15 03:17:34.606104] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:03.676 [2024-05-15 03:17:34.679039] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:03.676 [2024-05-15 03:17:34.692730] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.676 [2024-05-15 03:17:34.716902] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1b4a870 00:23:03.676 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:03.676 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:03.676 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:03.676 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:03.676 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:03.676 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:03.676 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.676 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:23:03.676 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.676 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.676 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.676 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.934 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.934 "name": "raid_bdev1", 00:23:03.934 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:03.934 "strip_size_kb": 0, 00:23:03.934 "state": "online", 00:23:03.934 "raid_level": "raid1", 00:23:03.934 "superblock": true, 00:23:03.934 "num_base_bdevs": 4, 00:23:03.934 "num_base_bdevs_discovered": 3, 00:23:03.934 "num_base_bdevs_operational": 3, 00:23:03.934 "base_bdevs_list": [ 00:23:03.934 { 00:23:03.934 "name": null, 00:23:03.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.935 "is_configured": false, 00:23:03.935 "data_offset": 2048, 00:23:03.935 "data_size": 63488 00:23:03.935 }, 00:23:03.935 { 00:23:03.935 "name": "BaseBdev2", 00:23:03.935 "uuid": "d646bcc9-ed0f-556f-8a1b-a8055fbc5d19", 00:23:03.935 "is_configured": true, 00:23:03.935 "data_offset": 2048, 00:23:03.935 "data_size": 63488 00:23:03.935 }, 00:23:03.935 { 00:23:03.935 "name": "BaseBdev3", 00:23:03.935 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:03.935 "is_configured": true, 00:23:03.935 "data_offset": 2048, 00:23:03.935 "data_size": 63488 00:23:03.935 }, 00:23:03.935 { 00:23:03.935 "name": "BaseBdev4", 00:23:03.935 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:03.935 "is_configured": true, 00:23:03.935 "data_offset": 2048, 00:23:03.935 "data_size": 63488 00:23:03.935 } 00:23:03.935 ] 00:23:03.935 }' 00:23:03.935 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.935 03:17:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:04.503 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:04.503 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:04.503 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:04.503 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:04.503 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:04.503 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.503 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.761 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:04.761 "name": "raid_bdev1", 00:23:04.761 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:04.761 "strip_size_kb": 0, 00:23:04.761 "state": "online", 00:23:04.761 "raid_level": "raid1", 00:23:04.761 "superblock": true, 00:23:04.761 "num_base_bdevs": 4, 00:23:04.761 "num_base_bdevs_discovered": 3, 00:23:04.761 "num_base_bdevs_operational": 3, 00:23:04.761 
"base_bdevs_list": [ 00:23:04.761 { 00:23:04.761 "name": null, 00:23:04.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.761 "is_configured": false, 00:23:04.761 "data_offset": 2048, 00:23:04.761 "data_size": 63488 00:23:04.761 }, 00:23:04.761 { 00:23:04.761 "name": "BaseBdev2", 00:23:04.761 "uuid": "d646bcc9-ed0f-556f-8a1b-a8055fbc5d19", 00:23:04.761 "is_configured": true, 00:23:04.761 "data_offset": 2048, 00:23:04.761 "data_size": 63488 00:23:04.761 }, 00:23:04.761 { 00:23:04.761 "name": "BaseBdev3", 00:23:04.761 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:04.761 "is_configured": true, 00:23:04.761 "data_offset": 2048, 00:23:04.761 "data_size": 63488 00:23:04.761 }, 00:23:04.761 { 00:23:04.761 "name": "BaseBdev4", 00:23:04.761 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:04.761 "is_configured": true, 00:23:04.761 "data_offset": 2048, 00:23:04.761 "data_size": 63488 00:23:04.761 } 00:23:04.761 ] 00:23:04.761 }' 00:23:04.761 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:04.761 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:04.761 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:05.020 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:05.020 03:17:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@667 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:05.020 [2024-05-15 03:17:36.175524] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:05.278 03:17:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # sleep 1 00:23:05.278 [2024-05-15 03:17:36.262723] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b4aad0 00:23:05.278 [2024-05-15 03:17:36.264289] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:05.278 [2024-05-15 03:17:36.410130] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:05.278 [2024-05-15 03:17:36.411379] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:05.537 [2024-05-15 03:17:36.666349] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:05.537 [2024-05-15 03:17:36.666514] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:05.796 [2024-05-15 03:17:36.933581] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:05.796 [2024-05-15 03:17:36.933906] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:06.055 [2024-05-15 03:17:37.064782] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:06.055 [2024-05-15 03:17:37.065362] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:06.314 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.314 03:17:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:06.314 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:06.314 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:06.314 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:06.314 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.314 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.314 [2024-05-15 03:17:37.414122] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:06.314 [2024-05-15 03:17:37.415287] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:06.573 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:06.573 "name": "raid_bdev1", 00:23:06.573 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:06.573 "strip_size_kb": 0, 00:23:06.573 "state": "online", 00:23:06.573 "raid_level": "raid1", 00:23:06.573 "superblock": true, 00:23:06.573 "num_base_bdevs": 4, 00:23:06.573 "num_base_bdevs_discovered": 4, 00:23:06.573 "num_base_bdevs_operational": 4, 00:23:06.573 "process": { 00:23:06.573 "type": "rebuild", 00:23:06.573 "target": "spare", 00:23:06.573 "progress": { 00:23:06.573 "blocks": 14336, 00:23:06.573 "percent": 22 00:23:06.573 } 00:23:06.573 }, 00:23:06.573 "base_bdevs_list": [ 00:23:06.573 { 00:23:06.573 "name": "spare", 00:23:06.573 "uuid": "bf1d6882-df47-5bcd-8aae-30dd84fde95f", 00:23:06.573 "is_configured": true, 00:23:06.573 "data_offset": 2048, 00:23:06.573 "data_size": 63488 00:23:06.573 }, 00:23:06.573 { 00:23:06.573 "name": "BaseBdev2", 00:23:06.573 "uuid": "d646bcc9-ed0f-556f-8a1b-a8055fbc5d19", 00:23:06.573 "is_configured": true, 00:23:06.573 "data_offset": 2048, 00:23:06.573 "data_size": 63488 00:23:06.573 }, 00:23:06.573 { 00:23:06.573 "name": "BaseBdev3", 00:23:06.573 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:06.573 "is_configured": true, 00:23:06.573 "data_offset": 2048, 00:23:06.573 "data_size": 63488 00:23:06.573 }, 00:23:06.573 { 00:23:06.573 "name": "BaseBdev4", 00:23:06.573 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:06.573 "is_configured": true, 00:23:06.573 "data_offset": 2048, 00:23:06.573 "data_size": 63488 00:23:06.573 } 00:23:06.573 ] 00:23:06.573 }' 00:23:06.573 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:06.573 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:06.573 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:06.573 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.573 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # '[' true = true ']' 00:23:06.573 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # '[' = false ']' 00:23:06.573 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 671: [: =: unary operator expected 00:23:06.573 03:17:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=4 00:23:06.573 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:23:06.573 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # '[' 4 -gt 2 ']' 00:23:06.573 03:17:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@700 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:06.573 [2024-05-15 03:17:37.647401] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:06.832 [2024-05-15 03:17:37.835784] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:07.091 [2024-05-15 03:17:37.996235] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1b4a870 00:23:07.091 [2024-05-15 03:17:37.996268] bdev_raid.c:1957:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1b4aad0 00:23:07.091 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@703 -- # base_bdevs[1]= 00:23:07.091 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@704 -- # (( num_base_bdevs_operational-- )) 00:23:07.091 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:07.091 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:07.091 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:07.091 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:07.091 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:07.091 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.091 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.091 [2024-05-15 03:17:38.138918] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:07.350 [2024-05-15 03:17:38.361573] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:07.350 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:07.350 "name": "raid_bdev1", 00:23:07.350 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:07.350 "strip_size_kb": 0, 00:23:07.350 "state": "online", 00:23:07.350 "raid_level": "raid1", 00:23:07.350 "superblock": true, 00:23:07.350 "num_base_bdevs": 4, 00:23:07.350 "num_base_bdevs_discovered": 3, 00:23:07.350 "num_base_bdevs_operational": 3, 00:23:07.350 "process": { 00:23:07.350 "type": "rebuild", 00:23:07.350 "target": "spare", 00:23:07.350 "progress": { 00:23:07.350 "blocks": 20480, 00:23:07.350 "percent": 32 00:23:07.350 } 00:23:07.350 }, 00:23:07.350 "base_bdevs_list": [ 00:23:07.350 { 00:23:07.350 "name": "spare", 00:23:07.350 "uuid": "bf1d6882-df47-5bcd-8aae-30dd84fde95f", 00:23:07.350 "is_configured": true, 00:23:07.350 "data_offset": 2048, 00:23:07.350 "data_size": 63488 00:23:07.350 }, 00:23:07.350 { 00:23:07.351 "name": null, 00:23:07.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.351 
"is_configured": false, 00:23:07.351 "data_offset": 2048, 00:23:07.351 "data_size": 63488 00:23:07.351 }, 00:23:07.351 { 00:23:07.351 "name": "BaseBdev3", 00:23:07.351 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:07.351 "is_configured": true, 00:23:07.351 "data_offset": 2048, 00:23:07.351 "data_size": 63488 00:23:07.351 }, 00:23:07.351 { 00:23:07.351 "name": "BaseBdev4", 00:23:07.351 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:07.351 "is_configured": true, 00:23:07.351 "data_offset": 2048, 00:23:07.351 "data_size": 63488 00:23:07.351 } 00:23:07.351 ] 00:23:07.351 }' 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # local timeout=835 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.351 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.609 [2024-05-15 03:17:38.565563] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:07.609 [2024-05-15 03:17:38.667313] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:07.609 [2024-05-15 03:17:38.667531] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:07.609 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:07.609 "name": "raid_bdev1", 00:23:07.609 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:07.609 "strip_size_kb": 0, 00:23:07.609 "state": "online", 00:23:07.609 "raid_level": "raid1", 00:23:07.609 "superblock": true, 00:23:07.609 "num_base_bdevs": 4, 00:23:07.609 "num_base_bdevs_discovered": 3, 00:23:07.609 "num_base_bdevs_operational": 3, 00:23:07.609 "process": { 00:23:07.609 "type": "rebuild", 00:23:07.609 "target": "spare", 00:23:07.609 "progress": { 00:23:07.609 "blocks": 28672, 00:23:07.609 "percent": 45 00:23:07.609 } 00:23:07.609 }, 00:23:07.609 "base_bdevs_list": [ 00:23:07.609 { 00:23:07.609 "name": "spare", 00:23:07.609 "uuid": "bf1d6882-df47-5bcd-8aae-30dd84fde95f", 00:23:07.609 "is_configured": true, 00:23:07.609 "data_offset": 2048, 
00:23:07.609 "data_size": 63488 00:23:07.609 }, 00:23:07.609 { 00:23:07.609 "name": null, 00:23:07.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.609 "is_configured": false, 00:23:07.609 "data_offset": 2048, 00:23:07.609 "data_size": 63488 00:23:07.609 }, 00:23:07.609 { 00:23:07.609 "name": "BaseBdev3", 00:23:07.609 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:07.609 "is_configured": true, 00:23:07.609 "data_offset": 2048, 00:23:07.609 "data_size": 63488 00:23:07.609 }, 00:23:07.609 { 00:23:07.609 "name": "BaseBdev4", 00:23:07.609 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:07.609 "is_configured": true, 00:23:07.609 "data_offset": 2048, 00:23:07.609 "data_size": 63488 00:23:07.609 } 00:23:07.609 ] 00:23:07.609 }' 00:23:07.609 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:07.609 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:07.609 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:07.868 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:07.868 03:17:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # sleep 1 00:23:07.868 [2024-05-15 03:17:38.929741] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:08.126 [2024-05-15 03:17:39.050341] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:08.385 [2024-05-15 03:17:39.495172] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:08.951 03:17:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:23:08.951 03:17:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:08.951 03:17:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:08.951 03:17:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:08.951 03:17:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:08.951 03:17:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:08.951 03:17:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.951 03:17:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.951 [2024-05-15 03:17:39.819994] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:23:08.951 [2024-05-15 03:17:39.931523] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:08.951 03:17:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:08.951 "name": "raid_bdev1", 00:23:08.951 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:08.951 "strip_size_kb": 0, 00:23:08.951 "state": "online", 00:23:08.951 "raid_level": "raid1", 00:23:08.951 "superblock": true, 00:23:08.951 "num_base_bdevs": 4, 00:23:08.951 
"num_base_bdevs_discovered": 3, 00:23:08.951 "num_base_bdevs_operational": 3, 00:23:08.952 "process": { 00:23:08.952 "type": "rebuild", 00:23:08.952 "target": "spare", 00:23:08.952 "progress": { 00:23:08.952 "blocks": 47104, 00:23:08.952 "percent": 74 00:23:08.952 } 00:23:08.952 }, 00:23:08.952 "base_bdevs_list": [ 00:23:08.952 { 00:23:08.952 "name": "spare", 00:23:08.952 "uuid": "bf1d6882-df47-5bcd-8aae-30dd84fde95f", 00:23:08.952 "is_configured": true, 00:23:08.952 "data_offset": 2048, 00:23:08.952 "data_size": 63488 00:23:08.952 }, 00:23:08.952 { 00:23:08.952 "name": null, 00:23:08.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.952 "is_configured": false, 00:23:08.952 "data_offset": 2048, 00:23:08.952 "data_size": 63488 00:23:08.952 }, 00:23:08.952 { 00:23:08.952 "name": "BaseBdev3", 00:23:08.952 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:08.952 "is_configured": true, 00:23:08.952 "data_offset": 2048, 00:23:08.952 "data_size": 63488 00:23:08.952 }, 00:23:08.952 { 00:23:08.952 "name": "BaseBdev4", 00:23:08.952 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:08.952 "is_configured": true, 00:23:08.952 "data_offset": 2048, 00:23:08.952 "data_size": 63488 00:23:08.952 } 00:23:08.952 ] 00:23:08.952 }' 00:23:08.952 03:17:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:09.210 03:17:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:09.210 03:17:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:09.210 03:17:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:09.210 03:17:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # sleep 1 00:23:09.210 [2024-05-15 03:17:40.294534] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:09.776 [2024-05-15 03:17:40.750206] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:23:10.035 [2024-05-15 03:17:40.982153] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:10.035 [2024-05-15 03:17:41.091152] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:10.035 [2024-05-15 03:17:41.094487] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.035 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:23:10.035 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:10.035 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:10.035 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:10.035 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:10.035 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:10.035 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.035 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.293 
03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:10.293 "name": "raid_bdev1", 00:23:10.293 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:10.293 "strip_size_kb": 0, 00:23:10.293 "state": "online", 00:23:10.293 "raid_level": "raid1", 00:23:10.293 "superblock": true, 00:23:10.293 "num_base_bdevs": 4, 00:23:10.293 "num_base_bdevs_discovered": 3, 00:23:10.293 "num_base_bdevs_operational": 3, 00:23:10.293 "base_bdevs_list": [ 00:23:10.293 { 00:23:10.293 "name": "spare", 00:23:10.293 "uuid": "bf1d6882-df47-5bcd-8aae-30dd84fde95f", 00:23:10.293 "is_configured": true, 00:23:10.293 "data_offset": 2048, 00:23:10.293 "data_size": 63488 00:23:10.293 }, 00:23:10.293 { 00:23:10.293 "name": null, 00:23:10.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.293 "is_configured": false, 00:23:10.293 "data_offset": 2048, 00:23:10.293 "data_size": 63488 00:23:10.293 }, 00:23:10.293 { 00:23:10.293 "name": "BaseBdev3", 00:23:10.293 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:10.293 "is_configured": true, 00:23:10.293 "data_offset": 2048, 00:23:10.293 "data_size": 63488 00:23:10.293 }, 00:23:10.293 { 00:23:10.293 "name": "BaseBdev4", 00:23:10.293 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:10.293 "is_configured": true, 00:23:10.293 "data_offset": 2048, 00:23:10.293 "data_size": 63488 00:23:10.293 } 00:23:10.293 ] 00:23:10.293 }' 00:23:10.293 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:10.552 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:10.552 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:10.552 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:10.552 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # break 00:23:10.552 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:10.552 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:10.552 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:10.552 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:10.552 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:10.552 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.552 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.811 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:10.811 "name": "raid_bdev1", 00:23:10.811 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:10.811 "strip_size_kb": 0, 00:23:10.811 "state": "online", 00:23:10.811 "raid_level": "raid1", 00:23:10.811 "superblock": true, 00:23:10.811 "num_base_bdevs": 4, 00:23:10.812 "num_base_bdevs_discovered": 3, 00:23:10.812 "num_base_bdevs_operational": 3, 00:23:10.812 "base_bdevs_list": [ 00:23:10.812 { 00:23:10.812 "name": "spare", 00:23:10.812 "uuid": "bf1d6882-df47-5bcd-8aae-30dd84fde95f", 00:23:10.812 "is_configured": true, 00:23:10.812 "data_offset": 2048, 00:23:10.812 
"data_size": 63488 00:23:10.812 }, 00:23:10.812 { 00:23:10.812 "name": null, 00:23:10.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.812 "is_configured": false, 00:23:10.812 "data_offset": 2048, 00:23:10.812 "data_size": 63488 00:23:10.812 }, 00:23:10.812 { 00:23:10.812 "name": "BaseBdev3", 00:23:10.812 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:10.812 "is_configured": true, 00:23:10.812 "data_offset": 2048, 00:23:10.812 "data_size": 63488 00:23:10.812 }, 00:23:10.812 { 00:23:10.812 "name": "BaseBdev4", 00:23:10.812 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:10.812 "is_configured": true, 00:23:10.812 "data_offset": 2048, 00:23:10.812 "data_size": 63488 00:23:10.812 } 00:23:10.812 ] 00:23:10.812 }' 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.812 03:17:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.070 03:17:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:11.070 "name": "raid_bdev1", 00:23:11.070 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:11.070 "strip_size_kb": 0, 00:23:11.070 "state": "online", 00:23:11.070 "raid_level": "raid1", 00:23:11.070 "superblock": true, 00:23:11.070 "num_base_bdevs": 4, 00:23:11.070 "num_base_bdevs_discovered": 3, 00:23:11.070 "num_base_bdevs_operational": 3, 00:23:11.070 "base_bdevs_list": [ 00:23:11.070 { 00:23:11.070 "name": "spare", 00:23:11.070 "uuid": "bf1d6882-df47-5bcd-8aae-30dd84fde95f", 00:23:11.070 "is_configured": true, 00:23:11.070 "data_offset": 2048, 00:23:11.070 "data_size": 63488 00:23:11.070 }, 00:23:11.070 { 00:23:11.070 "name": null, 00:23:11.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.070 "is_configured": false, 00:23:11.070 "data_offset": 2048, 
00:23:11.070 "data_size": 63488 00:23:11.070 }, 00:23:11.070 { 00:23:11.070 "name": "BaseBdev3", 00:23:11.070 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:11.070 "is_configured": true, 00:23:11.070 "data_offset": 2048, 00:23:11.070 "data_size": 63488 00:23:11.070 }, 00:23:11.070 { 00:23:11.070 "name": "BaseBdev4", 00:23:11.070 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:11.070 "is_configured": true, 00:23:11.070 "data_offset": 2048, 00:23:11.070 "data_size": 63488 00:23:11.070 } 00:23:11.070 ] 00:23:11.070 }' 00:23:11.070 03:17:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:11.070 03:17:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:11.638 03:17:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:11.638 [2024-05-15 03:17:42.659685] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:11.638 [2024-05-15 03:17:42.659714] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:11.638 00:23:11.638 Latency(us) 00:23:11.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.638 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:11.638 raid_bdev1 : 11.12 82.54 247.61 0.00 0.00 15502.78 306.22 112846.75 00:23:11.638 =================================================================================================================== 00:23:11.638 Total : 82.54 247.61 0.00 0.00 15502.78 306.22 112846.75 00:23:11.638 [2024-05-15 03:17:42.764149] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.638 [2024-05-15 03:17:42.764177] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:11.638 [2024-05-15 03:17:42.764275] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:11.638 [2024-05-15 03:17:42.764284] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x199f6b0 name raid_bdev1, state offline 00:23:11.638 0 00:23:11.638 03:17:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.638 03:17:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # jq length 00:23:11.897 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]] 00:23:11.897 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' true = true ']' 00:23:11.897 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # '[' true = true ']' 00:23:11.897 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:11.897 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:11.897 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:11.897 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:11.897 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:11.897 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:11.897 
03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:23:11.897 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:11.897 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:11.897 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:12.156 /dev/nbd0 00:23:12.156 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:12.156 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:12.156 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:23:12.156 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:23:12.156 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:23:12.156 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:23:12.156 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:23:12.156 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:23:12.156 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:23:12.156 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:23:12.156 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:12.156 1+0 records in 00:23:12.156 1+0 records out 00:23:12.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226089 s, 18.1 MB/s 00:23:12.156 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # for bdev in "${base_bdevs[@]:1}" 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # '[' -z '' ']' 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # continue 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # for bdev in "${base_bdevs[@]:1}" 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # '[' -z BaseBdev3 ']' 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:12.415 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:23:12.415 /dev/nbd1 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:12.673 1+0 records in 00:23:12.673 1+0 records out 00:23:12.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018734 s, 21.9 MB/s 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@736 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:12.673 03:17:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.673 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # for bdev in "${base_bdevs[@]:1}" 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # '[' -z BaseBdev4 ']' 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:12.932 03:17:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:23:13.191 /dev/nbd1 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:23:13.191 03:17:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:13.191 1+0 records in 00:23:13.191 1+0 records out 00:23:13.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021666 s, 18.9 MB/s 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@736 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:13.191 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.470 
03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:13.470 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:13.739 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:13.739 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:13.739 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:13.739 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.739 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.739 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:13.739 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:23:13.739 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.739 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # '[' true = true ']' 00:23:13.739 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:23:13.739 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev1 ']' 00:23:13.739 03:17:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:13.997 03:17:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:14.257 [2024-05-15 03:17:45.324287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:14.257 [2024-05-15 03:17:45.324330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.257 [2024-05-15 03:17:45.324348] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b4a5e0 00:23:14.257 [2024-05-15 03:17:45.324362] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.257 [2024-05-15 03:17:45.326038] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.257 [2024-05-15 03:17:45.326065] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:14.257 [2024-05-15 03:17:45.326127] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:14.257 [2024-05-15 03:17:45.326152] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:14.257 BaseBdev1 00:23:14.257 03:17:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 
-- # for bdev in "${base_bdevs[@]}" 00:23:14.257 03:17:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # '[' -z '' ']' 00:23:14.257 03:17:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # continue 00:23:14.257 03:17:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:23:14.257 03:17:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev3 ']' 00:23:14.257 03:17:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:23:14.516 03:17:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:14.774 [2024-05-15 03:17:45.829695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:14.774 [2024-05-15 03:17:45.829731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.774 [2024-05-15 03:17:45.829747] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b4aa20 00:23:14.775 [2024-05-15 03:17:45.829756] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.775 [2024-05-15 03:17:45.830092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.775 [2024-05-15 03:17:45.830108] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:14.775 [2024-05-15 03:17:45.830168] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:23:14.775 [2024-05-15 03:17:45.830177] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:23:14.775 [2024-05-15 03:17:45.830184] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:14.775 [2024-05-15 03:17:45.830197] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x199d610 name raid_bdev1, state configuring 00:23:14.775 [2024-05-15 03:17:45.830224] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:14.775 BaseBdev3 00:23:14.775 03:17:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:23:14.775 03:17:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev4 ']' 00:23:14.775 03:17:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:23:15.033 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:15.291 [2024-05-15 03:17:46.339149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:15.291 [2024-05-15 03:17:46.339184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.291 [2024-05-15 03:17:46.339198] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19ebf30 00:23:15.291 [2024-05-15 03:17:46.339208] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.291 [2024-05-15 03:17:46.339498] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.291 [2024-05-15 03:17:46.339512] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:15.291 [2024-05-15 03:17:46.339569] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:23:15.291 [2024-05-15 03:17:46.339586] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:15.291 BaseBdev4 00:23:15.291 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:15.549 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:15.808 [2024-05-15 03:17:46.900741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:15.808 [2024-05-15 03:17:46.900774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.808 [2024-05-15 03:17:46.900788] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19a5d30 00:23:15.808 [2024-05-15 03:17:46.900797] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.808 [2024-05-15 03:17:46.901135] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.808 [2024-05-15 03:17:46.901150] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:15.808 [2024-05-15 03:17:46.901217] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:15.808 [2024-05-15 03:17:46.901233] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:15.808 spare 00:23:15.808 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:15.808 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:15.808 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:15.808 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:15.808 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:15.808 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:15.808 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:15.808 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:15.808 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:15.808 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:15.808 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.808 03:17:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.066 [2024-05-15 03:17:47.001565] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x19ef6c0 00:23:16.066 [2024-05-15 03:17:47.001579] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:16.066 [2024-05-15 03:17:47.001764] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x199d910 00:23:16.066 [2024-05-15 03:17:47.001926] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19ef6c0 00:23:16.066 [2024-05-15 03:17:47.001935] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19ef6c0 00:23:16.066 [2024-05-15 03:17:47.002048] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.066 03:17:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:16.066 "name": "raid_bdev1", 00:23:16.066 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:16.066 "strip_size_kb": 0, 00:23:16.066 "state": "online", 00:23:16.066 "raid_level": "raid1", 00:23:16.066 "superblock": true, 00:23:16.066 "num_base_bdevs": 4, 00:23:16.066 "num_base_bdevs_discovered": 3, 00:23:16.066 "num_base_bdevs_operational": 3, 00:23:16.066 "base_bdevs_list": [ 00:23:16.066 { 00:23:16.066 "name": "spare", 00:23:16.066 "uuid": "bf1d6882-df47-5bcd-8aae-30dd84fde95f", 00:23:16.066 "is_configured": true, 00:23:16.066 "data_offset": 2048, 00:23:16.066 "data_size": 63488 00:23:16.066 }, 00:23:16.066 { 00:23:16.066 "name": null, 00:23:16.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.066 "is_configured": false, 00:23:16.066 "data_offset": 2048, 00:23:16.066 "data_size": 63488 00:23:16.066 }, 00:23:16.066 { 00:23:16.066 "name": "BaseBdev3", 00:23:16.066 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:16.066 "is_configured": true, 00:23:16.066 "data_offset": 2048, 00:23:16.066 "data_size": 63488 00:23:16.066 }, 00:23:16.066 { 00:23:16.066 "name": "BaseBdev4", 00:23:16.066 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:16.066 "is_configured": true, 00:23:16.066 "data_offset": 2048, 00:23:16.067 "data_size": 63488 00:23:16.067 } 00:23:16.067 ] 00:23:16.067 }' 00:23:16.067 03:17:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:16.067 03:17:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:17.002 03:17:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:17.002 03:17:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:17.002 03:17:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:17.002 03:17:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:17.002 03:17:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:17.002 03:17:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.002 03:17:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.002 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:17.002 "name": "raid_bdev1", 00:23:17.002 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:17.002 "strip_size_kb": 0, 00:23:17.002 "state": "online", 00:23:17.002 "raid_level": "raid1", 00:23:17.002 "superblock": true, 00:23:17.002 "num_base_bdevs": 4, 00:23:17.002 "num_base_bdevs_discovered": 3, 00:23:17.002 
"num_base_bdevs_operational": 3, 00:23:17.002 "base_bdevs_list": [ 00:23:17.002 { 00:23:17.002 "name": "spare", 00:23:17.002 "uuid": "bf1d6882-df47-5bcd-8aae-30dd84fde95f", 00:23:17.002 "is_configured": true, 00:23:17.002 "data_offset": 2048, 00:23:17.002 "data_size": 63488 00:23:17.002 }, 00:23:17.002 { 00:23:17.002 "name": null, 00:23:17.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.002 "is_configured": false, 00:23:17.002 "data_offset": 2048, 00:23:17.002 "data_size": 63488 00:23:17.002 }, 00:23:17.002 { 00:23:17.002 "name": "BaseBdev3", 00:23:17.002 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:17.002 "is_configured": true, 00:23:17.002 "data_offset": 2048, 00:23:17.002 "data_size": 63488 00:23:17.002 }, 00:23:17.002 { 00:23:17.002 "name": "BaseBdev4", 00:23:17.002 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:17.002 "is_configured": true, 00:23:17.002 "data_offset": 2048, 00:23:17.002 "data_size": 63488 00:23:17.002 } 00:23:17.002 ] 00:23:17.002 }' 00:23:17.002 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:17.002 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:17.002 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:17.002 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:17.002 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.002 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:17.260 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.260 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:17.519 [2024-05-15 03:17:48.637732] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:17.519 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:17.519 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:17.519 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:17.519 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:17.519 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:17.519 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:17.519 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:17.519 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:17.519 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:17.519 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:17.519 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.519 03:17:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:17.777 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:17.777 "name": "raid_bdev1",
00:23:17.777 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600",
00:23:17.777 "strip_size_kb": 0,
00:23:17.777 "state": "online",
00:23:17.777 "raid_level": "raid1",
00:23:17.777 "superblock": true,
00:23:17.777 "num_base_bdevs": 4,
00:23:17.777 "num_base_bdevs_discovered": 2,
00:23:17.777 "num_base_bdevs_operational": 2,
00:23:17.777 "base_bdevs_list": [
00:23:17.777 {
00:23:17.777 "name": null,
00:23:17.777 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:17.777 "is_configured": false,
00:23:17.777 "data_offset": 2048,
00:23:17.777 "data_size": 63488
00:23:17.777 },
00:23:17.777 {
00:23:17.777 "name": null,
00:23:17.777 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:17.777 "is_configured": false,
00:23:17.777 "data_offset": 2048,
00:23:17.777 "data_size": 63488
00:23:17.777 },
00:23:17.777 {
00:23:17.777 "name": "BaseBdev3",
00:23:17.777 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103",
00:23:17.777 "is_configured": true,
00:23:17.777 "data_offset": 2048,
00:23:17.777 "data_size": 63488
00:23:17.777 },
00:23:17.777 {
00:23:17.777 "name": "BaseBdev4",
00:23:17.777 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550",
00:23:17.777 "is_configured": true,
00:23:17.777 "data_offset": 2048,
00:23:17.777 "data_size": 63488
00:23:17.777 }
00:23:17.777 ]
00:23:17.777 }'
00:23:17.777 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:17.777 03:17:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:23:18.713 03:17:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:23:18.714 [2024-05-15 03:17:49.768987] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:23:18.714 [2024-05-15 03:17:49.769136] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:23:18.714 [2024-05-15 03:17:49.769149] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
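The @765-@767 calls traced above drive a remove/re-add cycle for one base bdev of raid_bdev1 over the JSON-RPC socket, and the superblock seq_number comparison (spare (5) smaller than raid_bdev1 (6)) is what lets SPDK accept the device back as a rebuild target instead of rejecting it. A minimal standalone sketch of that cycle follows. It assumes the rpc.py path, socket, and bdev names from this run; every RPC method and jq filter used below appears verbatim in this log, but the rpc() wrapper and the polling loop are illustrative additions, not part of bdev_raid.sh:

    #!/usr/bin/env bash
    # Sketch only: remove a raid base bdev, re-add it, and poll until the
    # rebuild process started by the re-add has finished.
    rpc() {
        /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock "$@"
    }
    rpc bdev_raid_remove_base_bdev spare           # raid1 stays online, degraded
    rpc bdev_raid_add_base_bdev raid_bdev1 spare   # logs "Re-adding bdev spare"
    while true; do
        ptype=$(rpc bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [ "$ptype" = none ] && break               # no rebuild process left
        sleep 1
    done

The harness itself does not poll: it sleeps one second (@768) and then asserts process.type == rebuild and process.target == spare via verify_raid_bdev_process, as the next trace lines show. The inverse case appears further down, where re-adding BaseBdev1, whose superblock no longer matches the array, fails with -22 (Invalid argument).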
00:23:18.714 [2024-05-15 03:17:49.769174] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:18.714 [2024-05-15 03:17:49.773437] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x199d910 00:23:18.714 [2024-05-15 03:17:49.775544] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:18.714 03:17:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # sleep 1 00:23:19.650 03:17:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:19.650 03:17:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:19.650 03:17:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:19.650 03:17:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:19.650 03:17:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:19.650 03:17:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.650 03:17:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.909 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:19.909 "name": "raid_bdev1", 00:23:19.909 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:19.909 "strip_size_kb": 0, 00:23:19.909 "state": "online", 00:23:19.909 "raid_level": "raid1", 00:23:19.909 "superblock": true, 00:23:19.909 "num_base_bdevs": 4, 00:23:19.909 "num_base_bdevs_discovered": 3, 00:23:19.909 "num_base_bdevs_operational": 3, 00:23:19.909 "process": { 00:23:19.909 "type": "rebuild", 00:23:19.909 "target": "spare", 00:23:19.909 "progress": { 00:23:19.909 "blocks": 24576, 00:23:19.909 "percent": 38 00:23:19.909 } 00:23:19.909 }, 00:23:19.909 "base_bdevs_list": [ 00:23:19.909 { 00:23:19.909 "name": "spare", 00:23:19.909 "uuid": "bf1d6882-df47-5bcd-8aae-30dd84fde95f", 00:23:19.909 "is_configured": true, 00:23:19.909 "data_offset": 2048, 00:23:19.909 "data_size": 63488 00:23:19.909 }, 00:23:19.909 { 00:23:19.909 "name": null, 00:23:19.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.909 "is_configured": false, 00:23:19.909 "data_offset": 2048, 00:23:19.909 "data_size": 63488 00:23:19.909 }, 00:23:19.909 { 00:23:19.909 "name": "BaseBdev3", 00:23:19.909 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:19.909 "is_configured": true, 00:23:19.909 "data_offset": 2048, 00:23:19.909 "data_size": 63488 00:23:19.909 }, 00:23:19.909 { 00:23:19.909 "name": "BaseBdev4", 00:23:19.909 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:19.909 "is_configured": true, 00:23:19.909 "data_offset": 2048, 00:23:19.909 "data_size": 63488 00:23:19.909 } 00:23:19.909 ] 00:23:19.909 }' 00:23:19.909 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:20.168 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:20.168 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:20.168 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:20.168 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:20.427 [2024-05-15 03:17:51.380904] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:20.427 [2024-05-15 03:17:51.387982] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:20.427 [2024-05-15 03:17:51.388028] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:20.427 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:20.427 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:20.427 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:20.427 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:20.427 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:20.427 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:20.427 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:20.427 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:20.427 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:20.427 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:20.427 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.427 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.686 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:20.686 "name": "raid_bdev1", 00:23:20.686 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:20.686 "strip_size_kb": 0, 00:23:20.686 "state": "online", 00:23:20.686 "raid_level": "raid1", 00:23:20.686 "superblock": true, 00:23:20.686 "num_base_bdevs": 4, 00:23:20.686 "num_base_bdevs_discovered": 2, 00:23:20.686 "num_base_bdevs_operational": 2, 00:23:20.686 "base_bdevs_list": [ 00:23:20.686 { 00:23:20.686 "name": null, 00:23:20.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.686 "is_configured": false, 00:23:20.686 "data_offset": 2048, 00:23:20.686 "data_size": 63488 00:23:20.686 }, 00:23:20.686 { 00:23:20.686 "name": null, 00:23:20.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.686 "is_configured": false, 00:23:20.686 "data_offset": 2048, 00:23:20.686 "data_size": 63488 00:23:20.686 }, 00:23:20.686 { 00:23:20.686 "name": "BaseBdev3", 00:23:20.686 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:20.686 "is_configured": true, 00:23:20.686 "data_offset": 2048, 00:23:20.686 "data_size": 63488 00:23:20.686 }, 00:23:20.686 { 00:23:20.686 "name": "BaseBdev4", 00:23:20.686 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:20.686 "is_configured": true, 00:23:20.686 "data_offset": 2048, 00:23:20.686 "data_size": 63488 00:23:20.686 } 00:23:20.686 ] 00:23:20.686 }' 00:23:20.686 03:17:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:20.686 03:17:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:21.253 03:17:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:21.512 [2024-05-15 03:17:52.511362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:21.512 [2024-05-15 03:17:52.511406] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.512 [2024-05-15 03:17:52.511425] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19eff70 00:23:21.512 [2024-05-15 03:17:52.511434] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.512 [2024-05-15 03:17:52.511811] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.512 [2024-05-15 03:17:52.511826] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:21.512 [2024-05-15 03:17:52.511913] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:21.512 [2024-05-15 03:17:52.511924] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:23:21.512 [2024-05-15 03:17:52.511932] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:21.512 [2024-05-15 03:17:52.511948] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:21.512 [2024-05-15 03:17:52.516238] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x199d910 00:23:21.512 spare 00:23:21.512 [2024-05-15 03:17:52.517774] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:21.512 03:17:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:22.447 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:22.447 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:22.447 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:22.447 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:22.447 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:22.447 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.447 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.705 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:22.705 "name": "raid_bdev1", 00:23:22.705 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:22.705 "strip_size_kb": 0, 00:23:22.705 "state": "online", 00:23:22.705 "raid_level": "raid1", 00:23:22.705 "superblock": true, 00:23:22.705 "num_base_bdevs": 4, 00:23:22.705 "num_base_bdevs_discovered": 3, 00:23:22.705 "num_base_bdevs_operational": 3, 00:23:22.705 "process": { 00:23:22.705 "type": "rebuild", 00:23:22.705 "target": "spare", 00:23:22.705 "progress": { 00:23:22.705 "blocks": 24576, 00:23:22.705 "percent": 38 00:23:22.705 } 00:23:22.705 }, 
00:23:22.705 "base_bdevs_list": [ 00:23:22.705 { 00:23:22.705 "name": "spare", 00:23:22.705 "uuid": "bf1d6882-df47-5bcd-8aae-30dd84fde95f", 00:23:22.705 "is_configured": true, 00:23:22.705 "data_offset": 2048, 00:23:22.705 "data_size": 63488 00:23:22.705 }, 00:23:22.706 { 00:23:22.706 "name": null, 00:23:22.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.706 "is_configured": false, 00:23:22.706 "data_offset": 2048, 00:23:22.706 "data_size": 63488 00:23:22.706 }, 00:23:22.706 { 00:23:22.706 "name": "BaseBdev3", 00:23:22.706 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:22.706 "is_configured": true, 00:23:22.706 "data_offset": 2048, 00:23:22.706 "data_size": 63488 00:23:22.706 }, 00:23:22.706 { 00:23:22.706 "name": "BaseBdev4", 00:23:22.706 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:22.706 "is_configured": true, 00:23:22.706 "data_offset": 2048, 00:23:22.706 "data_size": 63488 00:23:22.706 } 00:23:22.706 ] 00:23:22.706 }' 00:23:22.706 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:22.706 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:22.706 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:22.963 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.963 03:17:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:22.963 [2024-05-15 03:17:54.117737] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:23.222 [2024-05-15 03:17:54.130171] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:23.222 [2024-05-15 03:17:54.130213] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.222 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:23.222 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:23.222 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.222 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:23.222 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:23.222 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:23.222 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.222 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.222 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.222 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.222 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.222 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.480 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- 
# raid_bdev_info='{ 00:23:23.480 "name": "raid_bdev1", 00:23:23.480 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:23.480 "strip_size_kb": 0, 00:23:23.480 "state": "online", 00:23:23.480 "raid_level": "raid1", 00:23:23.480 "superblock": true, 00:23:23.480 "num_base_bdevs": 4, 00:23:23.480 "num_base_bdevs_discovered": 2, 00:23:23.480 "num_base_bdevs_operational": 2, 00:23:23.480 "base_bdevs_list": [ 00:23:23.480 { 00:23:23.480 "name": null, 00:23:23.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.480 "is_configured": false, 00:23:23.480 "data_offset": 2048, 00:23:23.480 "data_size": 63488 00:23:23.480 }, 00:23:23.480 { 00:23:23.480 "name": null, 00:23:23.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.480 "is_configured": false, 00:23:23.480 "data_offset": 2048, 00:23:23.480 "data_size": 63488 00:23:23.480 }, 00:23:23.480 { 00:23:23.480 "name": "BaseBdev3", 00:23:23.480 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:23.480 "is_configured": true, 00:23:23.480 "data_offset": 2048, 00:23:23.480 "data_size": 63488 00:23:23.480 }, 00:23:23.480 { 00:23:23.480 "name": "BaseBdev4", 00:23:23.480 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:23.480 "is_configured": true, 00:23:23.480 "data_offset": 2048, 00:23:23.480 "data_size": 63488 00:23:23.480 } 00:23:23.480 ] 00:23:23.480 }' 00:23:23.480 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.480 03:17:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:24.048 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:24.048 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:24.048 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:24.048 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:24.048 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:24.048 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.048 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.306 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:24.306 "name": "raid_bdev1", 00:23:24.306 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:24.306 "strip_size_kb": 0, 00:23:24.306 "state": "online", 00:23:24.306 "raid_level": "raid1", 00:23:24.306 "superblock": true, 00:23:24.306 "num_base_bdevs": 4, 00:23:24.306 "num_base_bdevs_discovered": 2, 00:23:24.306 "num_base_bdevs_operational": 2, 00:23:24.306 "base_bdevs_list": [ 00:23:24.306 { 00:23:24.306 "name": null, 00:23:24.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.306 "is_configured": false, 00:23:24.306 "data_offset": 2048, 00:23:24.306 "data_size": 63488 00:23:24.306 }, 00:23:24.306 { 00:23:24.306 "name": null, 00:23:24.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.306 "is_configured": false, 00:23:24.306 "data_offset": 2048, 00:23:24.306 "data_size": 63488 00:23:24.306 }, 00:23:24.306 { 00:23:24.306 "name": "BaseBdev3", 00:23:24.306 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:24.306 "is_configured": true, 00:23:24.306 "data_offset": 
2048, 00:23:24.306 "data_size": 63488 00:23:24.306 }, 00:23:24.306 { 00:23:24.306 "name": "BaseBdev4", 00:23:24.306 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:24.306 "is_configured": true, 00:23:24.306 "data_offset": 2048, 00:23:24.306 "data_size": 63488 00:23:24.306 } 00:23:24.306 ] 00:23:24.306 }' 00:23:24.306 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:24.306 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:24.306 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:24.306 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:24.306 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:24.565 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@785 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:24.825 [2024-05-15 03:17:55.875237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:24.825 [2024-05-15 03:17:55.875280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.825 [2024-05-15 03:17:55.875298] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b4a5e0 00:23:24.825 [2024-05-15 03:17:55.875307] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.825 [2024-05-15 03:17:55.875653] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.825 [2024-05-15 03:17:55.875669] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:24.825 [2024-05-15 03:17:55.875730] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:24.825 [2024-05-15 03:17:55.875741] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:23:24.825 [2024-05-15 03:17:55.875749] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:24.825 BaseBdev1 00:23:24.825 03:17:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # sleep 1 00:23:25.760 03:17:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:25.760 03:17:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:25.760 03:17:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:25.760 03:17:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:25.760 03:17:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:25.760 03:17:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:25.760 03:17:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:25.760 03:17:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:25.760 03:17:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:23:25.760 03:17:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:25.760 03:17:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.760 03:17:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.018 03:17:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:26.018 "name": "raid_bdev1", 00:23:26.018 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:26.018 "strip_size_kb": 0, 00:23:26.018 "state": "online", 00:23:26.018 "raid_level": "raid1", 00:23:26.018 "superblock": true, 00:23:26.018 "num_base_bdevs": 4, 00:23:26.018 "num_base_bdevs_discovered": 2, 00:23:26.018 "num_base_bdevs_operational": 2, 00:23:26.018 "base_bdevs_list": [ 00:23:26.018 { 00:23:26.018 "name": null, 00:23:26.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.018 "is_configured": false, 00:23:26.018 "data_offset": 2048, 00:23:26.018 "data_size": 63488 00:23:26.018 }, 00:23:26.018 { 00:23:26.018 "name": null, 00:23:26.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.018 "is_configured": false, 00:23:26.018 "data_offset": 2048, 00:23:26.018 "data_size": 63488 00:23:26.019 }, 00:23:26.019 { 00:23:26.019 "name": "BaseBdev3", 00:23:26.019 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:26.019 "is_configured": true, 00:23:26.019 "data_offset": 2048, 00:23:26.019 "data_size": 63488 00:23:26.019 }, 00:23:26.019 { 00:23:26.019 "name": "BaseBdev4", 00:23:26.019 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:26.019 "is_configured": true, 00:23:26.019 "data_offset": 2048, 00:23:26.019 "data_size": 63488 00:23:26.019 } 00:23:26.019 ] 00:23:26.019 }' 00:23:26.019 03:17:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:26.019 03:17:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:26.955 03:17:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:26.955 03:17:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:26.955 03:17:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:26.955 03:17:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:26.955 03:17:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:26.955 03:17:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.955 03:17:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.955 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:26.955 "name": "raid_bdev1", 00:23:26.955 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:26.955 "strip_size_kb": 0, 00:23:26.955 "state": "online", 00:23:26.955 "raid_level": "raid1", 00:23:26.955 "superblock": true, 00:23:26.955 "num_base_bdevs": 4, 00:23:26.955 "num_base_bdevs_discovered": 2, 00:23:26.955 "num_base_bdevs_operational": 2, 00:23:26.955 "base_bdevs_list": [ 00:23:26.955 { 00:23:26.955 "name": null, 00:23:26.955 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:26.955 "is_configured": false, 00:23:26.955 "data_offset": 2048, 00:23:26.955 "data_size": 63488 00:23:26.955 }, 00:23:26.955 { 00:23:26.955 "name": null, 00:23:26.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.955 "is_configured": false, 00:23:26.955 "data_offset": 2048, 00:23:26.955 "data_size": 63488 00:23:26.955 }, 00:23:26.955 { 00:23:26.955 "name": "BaseBdev3", 00:23:26.955 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:26.955 "is_configured": true, 00:23:26.955 "data_offset": 2048, 00:23:26.955 "data_size": 63488 00:23:26.955 }, 00:23:26.955 { 00:23:26.955 "name": "BaseBdev4", 00:23:26.955 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:26.955 "is_configured": true, 00:23:26.955 "data_offset": 2048, 00:23:26.955 "data_size": 63488 00:23:26.955 } 00:23:26.955 ] 00:23:26.955 }' 00:23:26.955 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:26.955 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:26.955 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:27.214 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:27.214 [2024-05-15 03:17:58.362244] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:27.214 [2024-05-15 03:17:58.362360] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:23:27.214 [2024-05-15 03:17:58.362373] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:23:27.214 request:
00:23:27.214 {
00:23:27.214 "raid_bdev": "raid_bdev1",
00:23:27.214 "base_bdev": "BaseBdev1",
00:23:27.214 "method": "bdev_raid_add_base_bdev",
00:23:27.214 "req_id": 1
00:23:27.214 }
00:23:27.214 Got JSON-RPC error response
00:23:27.214 response:
00:23:27.214 {
00:23:27.214 "code": -22,
00:23:27.214 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:23:27.214 }
00:23:27.472 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1
00:23:27.472 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:27.472 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:27.472 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:27.472 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # sleep 1
00:23:28.439 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:23:28.439 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:28.439 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:28.439 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:23:28.439 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:23:28.439 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:23:28.439 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:28.439 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:28.439 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:28.439 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@125 -- # local tmp
00:23:28.439 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:28.439 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:28.706 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:28.706 "name": "raid_bdev1",
00:23:28.706 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600",
00:23:28.706 "strip_size_kb": 0,
00:23:28.706 "state": "online",
00:23:28.706 "raid_level": "raid1",
00:23:28.706 "superblock": true,
00:23:28.706 "num_base_bdevs": 4,
00:23:28.706 "num_base_bdevs_discovered": 2,
00:23:28.706 "num_base_bdevs_operational": 2,
00:23:28.706 "base_bdevs_list": [
00:23:28.706 {
00:23:28.706 "name": null,
00:23:28.706 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:28.706 "is_configured": false,
00:23:28.706 "data_offset": 2048,
00:23:28.706 "data_size": 63488
00:23:28.706 },
00:23:28.706 {
00:23:28.706 "name": null,
00:23:28.706 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:28.706 "is_configured": false,
00:23:28.706 "data_offset": 2048,
00:23:28.706 "data_size": 63488
00:23:28.706 },
00:23:28.706 { 00:23:28.706 "name": "BaseBdev3", 00:23:28.706 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:28.706 "is_configured": true, 00:23:28.706 "data_offset": 2048, 00:23:28.706 "data_size": 63488 00:23:28.706 }, 00:23:28.706 { 00:23:28.706 "name": "BaseBdev4", 00:23:28.706 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:28.706 "is_configured": true, 00:23:28.706 "data_offset": 2048, 00:23:28.706 "data_size": 63488 00:23:28.706 } 00:23:28.706 ] 00:23:28.706 }' 00:23:28.706 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:28.706 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:29.273 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:29.273 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:29.273 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:29.273 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:29.273 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:29.273 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.273 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:29.532 "name": "raid_bdev1", 00:23:29.532 "uuid": "a714f859-c8ba-4de0-8db9-fe8ce264c600", 00:23:29.532 "strip_size_kb": 0, 00:23:29.532 "state": "online", 00:23:29.532 "raid_level": "raid1", 00:23:29.532 "superblock": true, 00:23:29.532 "num_base_bdevs": 4, 00:23:29.532 "num_base_bdevs_discovered": 2, 00:23:29.532 "num_base_bdevs_operational": 2, 00:23:29.532 "base_bdevs_list": [ 00:23:29.532 { 00:23:29.532 "name": null, 00:23:29.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.532 "is_configured": false, 00:23:29.532 "data_offset": 2048, 00:23:29.532 "data_size": 63488 00:23:29.532 }, 00:23:29.532 { 00:23:29.532 "name": null, 00:23:29.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.532 "is_configured": false, 00:23:29.532 "data_offset": 2048, 00:23:29.532 "data_size": 63488 00:23:29.532 }, 00:23:29.532 { 00:23:29.532 "name": "BaseBdev3", 00:23:29.532 "uuid": "bf1df803-c773-5696-b678-7318eb9bb103", 00:23:29.532 "is_configured": true, 00:23:29.532 "data_offset": 2048, 00:23:29.532 "data_size": 63488 00:23:29.532 }, 00:23:29.532 { 00:23:29.532 "name": "BaseBdev4", 00:23:29.532 "uuid": "b9388dad-93e7-5ce4-92bf-6629bec50550", 00:23:29.532 "is_configured": true, 00:23:29.532 "data_offset": 2048, 00:23:29.532 "data_size": 63488 00:23:29.532 } 00:23:29.532 ] 00:23:29.532 }' 00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # 
killprocess 4185034
00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@946 -- # '[' -z 4185034 ']'
00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # kill -0 4185034
00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # uname
00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4185034
00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4185034'
00:23:29.532 killing process with pid 4185034
00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@965 -- # kill 4185034
00:23:29.532 Received shutdown signal, test time was about 28.993639 seconds
00:23:29.532
00:23:29.532                                                                                                  Latency(us)
00:23:29.532 Device Information                                                                        : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:23:29.532 ===================================================================================================================
00:23:29.532 Total                                                                                     :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
[2024-05-15 03:18:00.674841] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-05-15 03:18:00.674947] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-05-15 03:18:00.675005] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-05-15 03:18:00.675014] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19ef6c0 name raid_bdev1, state offline
00:23:29.532 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # wait 4185034
00:23:29.790 [2024-05-15 03:18:00.712141] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:23:29.790 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@797 -- # return 0
00:23:29.790
00:23:29.790 real 0m34.879s
00:23:29.790 user 0m56.740s
00:23:29.790 sys 0m4.221s
00:23:29.790 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1122 -- # xtrace_disable
00:23:29.791 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:23:29.791 ************************************
00:23:29.791 END TEST raid_rebuild_test_sb_io
00:23:29.791 ************************************
00:23:30.049 03:18:00 bdev_raid -- bdev/bdev_raid.sh@830 -- # '[' n == y ']'
00:23:30.049 03:18:00 bdev_raid -- bdev/bdev_raid.sh@842 -- # base_blocklen=4096
00:23:30.049 03:18:00 bdev_raid -- bdev/bdev_raid.sh@844 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true
00:23:30.049 03:18:00 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']'
00:23:30.049 03:18:00 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:23:30.049 03:18:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:23:30.049 ************************************
00:23:30.049 START TEST raid_state_function_test_sb_4k
00:23:30.049 ************************************
00:23:30.049 03:18:01 bdev_raid.raid_state_function_test_sb_4k --
common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:23:30.049 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:23:30.049 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:23:30.049 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:23:30.049 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:23:30.049 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:23:30.049 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:30.049 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:23:30.049 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:23:30.049 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:30.049 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # raid_pid=4191078 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 4191078' 00:23:30.050 Process raid pid: 4191078 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@247 -- # waitforlisten 4191078 /var/tmp/spdk-raid.sock 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 4191078 ']' 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:30.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:30.050 03:18:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:30.050 [2024-05-15 03:18:01.079927] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:23:30.050 [2024-05-15 03:18:01.079979] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.050 [2024-05-15 03:18:01.168401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.309 [2024-05-15 03:18:01.267884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.309 [2024-05-15 03:18:01.322088] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:30.309 [2024-05-15 03:18:01.322116] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:31.244 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:31.244 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:23:31.244 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:31.244 [2024-05-15 03:18:02.272571] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:31.244 [2024-05-15 03:18:02.272611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:31.244 [2024-05-15 03:18:02.272620] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:31.244 [2024-05-15 03:18:02.272633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:31.244 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:31.244 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:31.244 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:31.244 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:31.244 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:31.245 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:31.245 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.245 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:31.245 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:31.245 03:18:02 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.245 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.245 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.503 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:31.503 "name": "Existed_Raid", 00:23:31.503 "uuid": "49e29062-53c9-4191-9397-739042542714", 00:23:31.503 "strip_size_kb": 0, 00:23:31.503 "state": "configuring", 00:23:31.503 "raid_level": "raid1", 00:23:31.503 "superblock": true, 00:23:31.503 "num_base_bdevs": 2, 00:23:31.503 "num_base_bdevs_discovered": 0, 00:23:31.503 "num_base_bdevs_operational": 2, 00:23:31.503 "base_bdevs_list": [ 00:23:31.503 { 00:23:31.503 "name": "BaseBdev1", 00:23:31.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.503 "is_configured": false, 00:23:31.503 "data_offset": 0, 00:23:31.503 "data_size": 0 00:23:31.503 }, 00:23:31.503 { 00:23:31.503 "name": "BaseBdev2", 00:23:31.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.504 "is_configured": false, 00:23:31.504 "data_offset": 0, 00:23:31.504 "data_size": 0 00:23:31.504 } 00:23:31.504 ] 00:23:31.504 }' 00:23:31.504 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:31.504 03:18:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:32.071 03:18:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:32.330 [2024-05-15 03:18:03.391406] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:32.330 [2024-05-15 03:18:03.391434] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19a7dc0 name Existed_Raid, state configuring 00:23:32.330 03:18:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:32.588 [2024-05-15 03:18:03.648097] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:32.588 [2024-05-15 03:18:03.648119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:32.588 [2024-05-15 03:18:03.648127] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:32.588 [2024-05-15 03:18:03.648135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:32.588 03:18:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:23:32.846 [2024-05-15 03:18:03.910227] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:32.846 BaseBdev1 00:23:32.846 03:18:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:23:32.846 03:18:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:32.846 03:18:03 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:32.846 03:18:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:23:32.846 03:18:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:32.846 03:18:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:32.846 03:18:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:33.104 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:33.362 [ 00:23:33.362 { 00:23:33.362 "name": "BaseBdev1", 00:23:33.362 "aliases": [ 00:23:33.362 "905772f9-c28c-4e8e-99c6-6e27a8dd8960" 00:23:33.362 ], 00:23:33.362 "product_name": "Malloc disk", 00:23:33.362 "block_size": 4096, 00:23:33.362 "num_blocks": 8192, 00:23:33.362 "uuid": "905772f9-c28c-4e8e-99c6-6e27a8dd8960", 00:23:33.362 "assigned_rate_limits": { 00:23:33.362 "rw_ios_per_sec": 0, 00:23:33.362 "rw_mbytes_per_sec": 0, 00:23:33.362 "r_mbytes_per_sec": 0, 00:23:33.362 "w_mbytes_per_sec": 0 00:23:33.362 }, 00:23:33.362 "claimed": true, 00:23:33.362 "claim_type": "exclusive_write", 00:23:33.362 "zoned": false, 00:23:33.362 "supported_io_types": { 00:23:33.362 "read": true, 00:23:33.362 "write": true, 00:23:33.362 "unmap": true, 00:23:33.362 "write_zeroes": true, 00:23:33.362 "flush": true, 00:23:33.362 "reset": true, 00:23:33.362 "compare": false, 00:23:33.362 "compare_and_write": false, 00:23:33.362 "abort": true, 00:23:33.362 "nvme_admin": false, 00:23:33.362 "nvme_io": false 00:23:33.362 }, 00:23:33.362 "memory_domains": [ 00:23:33.362 { 00:23:33.362 "dma_device_id": "system", 00:23:33.362 "dma_device_type": 1 00:23:33.362 }, 00:23:33.362 { 00:23:33.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.362 "dma_device_type": 2 00:23:33.362 } 00:23:33.362 ], 00:23:33.362 "driver_specific": {} 00:23:33.362 } 00:23:33.362 ] 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- 
# local tmp 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.362 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.621 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.621 "name": "Existed_Raid", 00:23:33.621 "uuid": "b3c070c9-b70a-4667-8bc2-a6f78ebb82ad", 00:23:33.621 "strip_size_kb": 0, 00:23:33.621 "state": "configuring", 00:23:33.621 "raid_level": "raid1", 00:23:33.621 "superblock": true, 00:23:33.621 "num_base_bdevs": 2, 00:23:33.621 "num_base_bdevs_discovered": 1, 00:23:33.621 "num_base_bdevs_operational": 2, 00:23:33.621 "base_bdevs_list": [ 00:23:33.621 { 00:23:33.621 "name": "BaseBdev1", 00:23:33.621 "uuid": "905772f9-c28c-4e8e-99c6-6e27a8dd8960", 00:23:33.621 "is_configured": true, 00:23:33.621 "data_offset": 256, 00:23:33.621 "data_size": 7936 00:23:33.621 }, 00:23:33.621 { 00:23:33.621 "name": "BaseBdev2", 00:23:33.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.621 "is_configured": false, 00:23:33.621 "data_offset": 0, 00:23:33.621 "data_size": 0 00:23:33.621 } 00:23:33.621 ] 00:23:33.621 }' 00:23:33.621 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.621 03:18:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:34.188 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:34.447 [2024-05-15 03:18:05.510518] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:34.447 [2024-05-15 03:18:05.510555] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19a8060 name Existed_Raid, state configuring 00:23:34.447 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:34.705 [2024-05-15 03:18:05.751188] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:34.705 [2024-05-15 03:18:05.752707] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:34.705 [2024-05-15 03:18:05.752738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:34.705 
03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:34.705 03:18:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.964 03:18:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:34.964 "name": "Existed_Raid", 00:23:34.964 "uuid": "0fe722ad-4b1b-44d3-8528-3077e4dfc3ed", 00:23:34.964 "strip_size_kb": 0, 00:23:34.964 "state": "configuring", 00:23:34.964 "raid_level": "raid1", 00:23:34.964 "superblock": true, 00:23:34.964 "num_base_bdevs": 2, 00:23:34.964 "num_base_bdevs_discovered": 1, 00:23:34.964 "num_base_bdevs_operational": 2, 00:23:34.964 "base_bdevs_list": [ 00:23:34.964 { 00:23:34.964 "name": "BaseBdev1", 00:23:34.964 "uuid": "905772f9-c28c-4e8e-99c6-6e27a8dd8960", 00:23:34.964 "is_configured": true, 00:23:34.964 "data_offset": 256, 00:23:34.964 "data_size": 7936 00:23:34.964 }, 00:23:34.964 { 00:23:34.964 "name": "BaseBdev2", 00:23:34.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.964 "is_configured": false, 00:23:34.964 "data_offset": 0, 00:23:34.964 "data_size": 0 00:23:34.964 } 00:23:34.964 ] 00:23:34.964 }' 00:23:34.964 03:18:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:34.964 03:18:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:35.531 03:18:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:23:35.883 [2024-05-15 03:18:06.861629] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:35.883 [2024-05-15 03:18:06.861778] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x19a76b0 00:23:35.883 [2024-05-15 03:18:06.861792] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:35.883 [2024-05-15 03:18:06.861985] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19a7c70 00:23:35.883 [2024-05-15 03:18:06.862116] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19a76b0 00:23:35.884 [2024-05-15 03:18:06.862125] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x19a76b0 00:23:35.884 [2024-05-15 03:18:06.862221] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.884 BaseBdev2 00:23:35.884 03:18:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:23:35.884 03:18:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:35.884 03:18:06 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:35.884 03:18:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:23:35.884 03:18:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:35.884 03:18:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:35.884 03:18:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:36.142 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:36.401 [ 00:23:36.401 { 00:23:36.401 "name": "BaseBdev2", 00:23:36.401 "aliases": [ 00:23:36.401 "1eafb307-ff8a-4606-ae43-15d21c650910" 00:23:36.401 ], 00:23:36.401 "product_name": "Malloc disk", 00:23:36.401 "block_size": 4096, 00:23:36.401 "num_blocks": 8192, 00:23:36.401 "uuid": "1eafb307-ff8a-4606-ae43-15d21c650910", 00:23:36.401 "assigned_rate_limits": { 00:23:36.401 "rw_ios_per_sec": 0, 00:23:36.401 "rw_mbytes_per_sec": 0, 00:23:36.401 "r_mbytes_per_sec": 0, 00:23:36.401 "w_mbytes_per_sec": 0 00:23:36.401 }, 00:23:36.401 "claimed": true, 00:23:36.401 "claim_type": "exclusive_write", 00:23:36.401 "zoned": false, 00:23:36.401 "supported_io_types": { 00:23:36.401 "read": true, 00:23:36.401 "write": true, 00:23:36.401 "unmap": true, 00:23:36.401 "write_zeroes": true, 00:23:36.401 "flush": true, 00:23:36.401 "reset": true, 00:23:36.401 "compare": false, 00:23:36.401 "compare_and_write": false, 00:23:36.401 "abort": true, 00:23:36.401 "nvme_admin": false, 00:23:36.401 "nvme_io": false 00:23:36.401 }, 00:23:36.401 "memory_domains": [ 00:23:36.401 { 00:23:36.401 "dma_device_id": "system", 00:23:36.401 "dma_device_type": 1 00:23:36.401 }, 00:23:36.401 { 00:23:36.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.401 "dma_device_type": 2 00:23:36.401 } 00:23:36.401 ], 00:23:36.401 "driver_specific": {} 00:23:36.401 } 00:23:36.401 ] 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:36.401 03:18:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.401 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.660 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:36.660 "name": "Existed_Raid", 00:23:36.660 "uuid": "0fe722ad-4b1b-44d3-8528-3077e4dfc3ed", 00:23:36.660 "strip_size_kb": 0, 00:23:36.660 "state": "online", 00:23:36.660 "raid_level": "raid1", 00:23:36.660 "superblock": true, 00:23:36.660 "num_base_bdevs": 2, 00:23:36.660 "num_base_bdevs_discovered": 2, 00:23:36.660 "num_base_bdevs_operational": 2, 00:23:36.660 "base_bdevs_list": [ 00:23:36.660 { 00:23:36.660 "name": "BaseBdev1", 00:23:36.660 "uuid": "905772f9-c28c-4e8e-99c6-6e27a8dd8960", 00:23:36.660 "is_configured": true, 00:23:36.660 "data_offset": 256, 00:23:36.660 "data_size": 7936 00:23:36.660 }, 00:23:36.660 { 00:23:36.660 "name": "BaseBdev2", 00:23:36.660 "uuid": "1eafb307-ff8a-4606-ae43-15d21c650910", 00:23:36.660 "is_configured": true, 00:23:36.660 "data_offset": 256, 00:23:36.660 "data_size": 7936 00:23:36.660 } 00:23:36.660 ] 00:23:36.660 }' 00:23:36.660 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:36.660 03:18:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:37.227 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:23:37.227 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:23:37.227 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:37.227 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:23:37.227 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:37.227 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # local name 00:23:37.227 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:37.227 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:37.486 [2024-05-15 03:18:08.482218] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:37.486 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:37.486 "name": "Existed_Raid", 00:23:37.486 "aliases": [ 00:23:37.486 "0fe722ad-4b1b-44d3-8528-3077e4dfc3ed" 00:23:37.486 ], 00:23:37.486 "product_name": "Raid Volume", 00:23:37.486 "block_size": 4096, 00:23:37.486 "num_blocks": 7936, 00:23:37.486 "uuid": "0fe722ad-4b1b-44d3-8528-3077e4dfc3ed", 00:23:37.486 "assigned_rate_limits": { 00:23:37.486 "rw_ios_per_sec": 0, 00:23:37.486 "rw_mbytes_per_sec": 0, 00:23:37.486 "r_mbytes_per_sec": 0, 00:23:37.486 "w_mbytes_per_sec": 0 00:23:37.486 }, 00:23:37.486 "claimed": false, 00:23:37.486 
"zoned": false, 00:23:37.486 "supported_io_types": { 00:23:37.486 "read": true, 00:23:37.486 "write": true, 00:23:37.486 "unmap": false, 00:23:37.486 "write_zeroes": true, 00:23:37.486 "flush": false, 00:23:37.486 "reset": true, 00:23:37.486 "compare": false, 00:23:37.486 "compare_and_write": false, 00:23:37.486 "abort": false, 00:23:37.486 "nvme_admin": false, 00:23:37.486 "nvme_io": false 00:23:37.486 }, 00:23:37.486 "memory_domains": [ 00:23:37.486 { 00:23:37.486 "dma_device_id": "system", 00:23:37.486 "dma_device_type": 1 00:23:37.486 }, 00:23:37.486 { 00:23:37.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.486 "dma_device_type": 2 00:23:37.486 }, 00:23:37.486 { 00:23:37.486 "dma_device_id": "system", 00:23:37.486 "dma_device_type": 1 00:23:37.486 }, 00:23:37.486 { 00:23:37.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.486 "dma_device_type": 2 00:23:37.486 } 00:23:37.486 ], 00:23:37.486 "driver_specific": { 00:23:37.486 "raid": { 00:23:37.486 "uuid": "0fe722ad-4b1b-44d3-8528-3077e4dfc3ed", 00:23:37.486 "strip_size_kb": 0, 00:23:37.486 "state": "online", 00:23:37.486 "raid_level": "raid1", 00:23:37.486 "superblock": true, 00:23:37.486 "num_base_bdevs": 2, 00:23:37.486 "num_base_bdevs_discovered": 2, 00:23:37.486 "num_base_bdevs_operational": 2, 00:23:37.486 "base_bdevs_list": [ 00:23:37.486 { 00:23:37.486 "name": "BaseBdev1", 00:23:37.486 "uuid": "905772f9-c28c-4e8e-99c6-6e27a8dd8960", 00:23:37.486 "is_configured": true, 00:23:37.486 "data_offset": 256, 00:23:37.486 "data_size": 7936 00:23:37.486 }, 00:23:37.486 { 00:23:37.486 "name": "BaseBdev2", 00:23:37.486 "uuid": "1eafb307-ff8a-4606-ae43-15d21c650910", 00:23:37.486 "is_configured": true, 00:23:37.486 "data_offset": 256, 00:23:37.486 "data_size": 7936 00:23:37.486 } 00:23:37.486 ] 00:23:37.486 } 00:23:37.486 } 00:23:37.486 }' 00:23:37.486 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:37.486 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:23:37.486 BaseBdev2' 00:23:37.486 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:37.486 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:37.486 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:37.744 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:37.745 "name": "BaseBdev1", 00:23:37.745 "aliases": [ 00:23:37.745 "905772f9-c28c-4e8e-99c6-6e27a8dd8960" 00:23:37.745 ], 00:23:37.745 "product_name": "Malloc disk", 00:23:37.745 "block_size": 4096, 00:23:37.745 "num_blocks": 8192, 00:23:37.745 "uuid": "905772f9-c28c-4e8e-99c6-6e27a8dd8960", 00:23:37.745 "assigned_rate_limits": { 00:23:37.745 "rw_ios_per_sec": 0, 00:23:37.745 "rw_mbytes_per_sec": 0, 00:23:37.745 "r_mbytes_per_sec": 0, 00:23:37.745 "w_mbytes_per_sec": 0 00:23:37.745 }, 00:23:37.745 "claimed": true, 00:23:37.745 "claim_type": "exclusive_write", 00:23:37.745 "zoned": false, 00:23:37.745 "supported_io_types": { 00:23:37.745 "read": true, 00:23:37.745 "write": true, 00:23:37.745 "unmap": true, 00:23:37.745 "write_zeroes": true, 00:23:37.745 "flush": true, 00:23:37.745 "reset": true, 00:23:37.745 "compare": false, 
00:23:37.745 "compare_and_write": false, 00:23:37.745 "abort": true, 00:23:37.745 "nvme_admin": false, 00:23:37.745 "nvme_io": false 00:23:37.745 }, 00:23:37.745 "memory_domains": [ 00:23:37.745 { 00:23:37.745 "dma_device_id": "system", 00:23:37.745 "dma_device_type": 1 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.745 "dma_device_type": 2 00:23:37.745 } 00:23:37.745 ], 00:23:37.745 "driver_specific": {} 00:23:37.745 }' 00:23:37.745 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:37.745 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:37.745 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:23:37.745 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:38.003 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:38.003 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:38.003 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:38.003 03:18:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:38.003 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:38.003 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:38.003 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:38.003 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:38.003 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:38.003 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:38.003 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:38.263 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:38.263 "name": "BaseBdev2", 00:23:38.263 "aliases": [ 00:23:38.263 "1eafb307-ff8a-4606-ae43-15d21c650910" 00:23:38.263 ], 00:23:38.263 "product_name": "Malloc disk", 00:23:38.263 "block_size": 4096, 00:23:38.263 "num_blocks": 8192, 00:23:38.263 "uuid": "1eafb307-ff8a-4606-ae43-15d21c650910", 00:23:38.263 "assigned_rate_limits": { 00:23:38.263 "rw_ios_per_sec": 0, 00:23:38.263 "rw_mbytes_per_sec": 0, 00:23:38.263 "r_mbytes_per_sec": 0, 00:23:38.263 "w_mbytes_per_sec": 0 00:23:38.263 }, 00:23:38.263 "claimed": true, 00:23:38.264 "claim_type": "exclusive_write", 00:23:38.264 "zoned": false, 00:23:38.264 "supported_io_types": { 00:23:38.264 "read": true, 00:23:38.264 "write": true, 00:23:38.264 "unmap": true, 00:23:38.264 "write_zeroes": true, 00:23:38.264 "flush": true, 00:23:38.264 "reset": true, 00:23:38.264 "compare": false, 00:23:38.264 "compare_and_write": false, 00:23:38.264 "abort": true, 00:23:38.264 "nvme_admin": false, 00:23:38.264 "nvme_io": false 00:23:38.264 }, 00:23:38.264 "memory_domains": [ 00:23:38.264 { 00:23:38.264 "dma_device_id": "system", 00:23:38.264 "dma_device_type": 1 00:23:38.264 }, 00:23:38.264 { 00:23:38.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.264 
"dma_device_type": 2 00:23:38.264 } 00:23:38.264 ], 00:23:38.264 "driver_specific": {} 00:23:38.264 }' 00:23:38.264 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:38.264 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:38.528 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:23:38.528 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:38.528 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:38.528 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:38.528 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:38.528 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:38.529 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:38.529 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:38.529 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:38.529 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:38.529 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:38.787 [2024-05-15 03:18:09.905814] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # local expected_state 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # case $1 in 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # return 0 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.787 03:18:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:39.045 03:18:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:39.045 "name": "Existed_Raid", 00:23:39.045 "uuid": "0fe722ad-4b1b-44d3-8528-3077e4dfc3ed", 00:23:39.045 "strip_size_kb": 0, 00:23:39.045 "state": "online", 00:23:39.045 "raid_level": "raid1", 00:23:39.045 "superblock": true, 00:23:39.045 "num_base_bdevs": 2, 00:23:39.045 "num_base_bdevs_discovered": 1, 00:23:39.045 "num_base_bdevs_operational": 1, 00:23:39.045 "base_bdevs_list": [ 00:23:39.045 { 00:23:39.045 "name": null, 00:23:39.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.045 "is_configured": false, 00:23:39.045 "data_offset": 256, 00:23:39.045 "data_size": 7936 00:23:39.045 }, 00:23:39.045 { 00:23:39.045 "name": "BaseBdev2", 00:23:39.045 "uuid": "1eafb307-ff8a-4606-ae43-15d21c650910", 00:23:39.045 "is_configured": true, 00:23:39.045 "data_offset": 256, 00:23:39.045 "data_size": 7936 00:23:39.045 } 00:23:39.045 ] 00:23:39.045 }' 00:23:39.045 03:18:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:39.045 03:18:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:39.981 03:18:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:39.981 03:18:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:39.981 03:18:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.981 03:18:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:23:39.981 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:23:39.981 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:39.981 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:40.239 [2024-05-15 03:18:11.286649] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:40.239 [2024-05-15 03:18:11.286722] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:40.239 [2024-05-15 03:18:11.297583] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:40.240 [2024-05-15 03:18:11.297644] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:40.240 [2024-05-15 03:18:11.297654] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19a76b0 name Existed_Raid, state offline 00:23:40.240 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:40.240 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:40.240 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:40.240 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@342 -- # killprocess 4191078 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 4191078 ']' 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 4191078 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4191078 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4191078' 00:23:40.498 killing process with pid 4191078 00:23:40.498 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@965 -- # kill 4191078 00:23:40.498 [2024-05-15 03:18:11.514350] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:40.499 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # wait 4191078 00:23:40.499 [2024-05-15 03:18:11.515212] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:40.757 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@344 -- # return 0 00:23:40.757 00:23:40.757 real 0m10.721s 00:23:40.757 user 0m19.476s 00:23:40.757 sys 0m1.552s 00:23:40.757 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:40.757 03:18:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:40.757 ************************************ 00:23:40.757 END TEST raid_state_function_test_sb_4k 00:23:40.757 ************************************ 00:23:40.757 03:18:11 bdev_raid -- bdev/bdev_raid.sh@845 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:23:40.757 03:18:11 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:23:40.757 03:18:11 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:40.758 03:18:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:40.758 ************************************ 00:23:40.758 START TEST raid_superblock_test_4k 00:23:40.758 ************************************ 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=4193504 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 4193504 /var/tmp/spdk-raid.sock 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@827 -- # '[' -z 4193504 ']' 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:40.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:40.758 03:18:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:40.758 [2024-05-15 03:18:11.861025] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
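
The raid_superblock_test run starting here drives everything over SPDK's JSON-RPC UNIX socket: start bdev_svc, wait for /var/tmp/spdk-raid.sock to answer, create malloc bdevs, wrap them in passthru bdevs (pt1/pt2), and assemble a raid1 volume with an on-disk superblock. Below is a minimal, self-contained sketch of that sequence, condensed from the rpc.py commands visible in this trace. The waitforlisten polling loop is a paraphrase of the autotest helper (assuming the standard rpc.py -t timeout flag and rpc_get_methods RPC), not its verbatim code, and paths are shortened to be relative to the SPDK tree:

    #!/usr/bin/env bash
    # Condensed sketch of the RPC sequence traced in this log; the real test
    # uses the helpers in test/common/autotest_common.sh and test/bdev/bdev_raid.sh.
    set -e

    rpc_sock=/var/tmp/spdk-raid.sock
    rpc="scripts/rpc.py -s $rpc_sock"

    # Start the minimal bdev application that serves the RPC socket.
    test/app/bdev_svc/bdev_svc -r "$rpc_sock" -L bdev_raid &
    svc_pid=$!

    # Paraphrase of waitforlisten: poll until the socket accepts an RPC.
    until $rpc -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.1
    done

    # Two 32 MiB malloc bdevs with 4096-byte blocks, wrapped in passthru
    # bdevs so the test can delete pt1/pt2 later without touching the
    # backing malloc disks.
    $rpc bdev_malloc_create 32 4096 -b malloc1
    $rpc bdev_malloc_create 32 4096 -b malloc2
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    # Assemble the mirror; -s writes a superblock onto each base bdev.
    $rpc bdev_raid_create -s -r raid1 -b 'pt1 pt2' -n raid_bdev1

    # Inspect the result, as the test's verify helpers do.
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

    kill "$svc_pid"

The numbers in the property dumps that follow line up with this: each 32 MiB malloc bdev is 8192 blocks of 4096 bytes, while the assembled volume reports num_blocks 7936 with data_offset 256 on each base bdev, consistent with the -s superblock reserving the first 256 blocks.
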
00:23:40.758 [2024-05-15 03:18:11.861077] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4193504 ] 00:23:41.017 [2024-05-15 03:18:11.960329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.017 [2024-05-15 03:18:12.054359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.017 [2024-05-15 03:18:12.124186] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.017 [2024-05-15 03:18:12.124221] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.953 03:18:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:41.953 03:18:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # return 0 00:23:41.953 03:18:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:41.953 03:18:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:41.953 03:18:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:41.953 03:18:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:41.953 03:18:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:41.953 03:18:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:41.953 03:18:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:41.953 03:18:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:41.953 03:18:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:23:41.953 malloc1 00:23:41.953 03:18:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:42.211 [2024-05-15 03:18:13.205934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:42.211 [2024-05-15 03:18:13.205979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.211 [2024-05-15 03:18:13.205999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f77a00 00:23:42.211 [2024-05-15 03:18:13.206009] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.211 [2024-05-15 03:18:13.207700] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.211 [2024-05-15 03:18:13.207726] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:42.211 pt1 00:23:42.211 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:42.211 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:42.211 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:42.211 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:42.211 03:18:13 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:42.211 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:42.211 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:42.211 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:42.211 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:23:42.470 malloc2 00:23:42.470 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:42.470 [2024-05-15 03:18:13.627739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:42.470 [2024-05-15 03:18:13.627780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.470 [2024-05-15 03:18:13.627801] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f785f0 00:23:42.470 [2024-05-15 03:18:13.627816] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.728 [2024-05-15 03:18:13.629359] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.728 [2024-05-15 03:18:13.629386] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:42.728 pt2 00:23:42.728 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:42.728 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:42.728 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:23:42.728 [2024-05-15 03:18:13.876424] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:42.728 [2024-05-15 03:18:13.877765] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:42.728 [2024-05-15 03:18:13.877925] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x211d760 00:23:42.728 [2024-05-15 03:18:13.877937] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:42.728 [2024-05-15 03:18:13.878137] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x212a560 00:23:42.728 [2024-05-15 03:18:13.878292] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x211d760 00:23:42.729 [2024-05-15 03:18:13.878301] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x211d760 00:23:42.729 [2024-05-15 03:18:13.878399] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:42.987 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:42.987 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:42.987 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:42.987 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # 
local raid_level=raid1 00:23:42.987 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:42.987 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:42.987 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:42.987 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:42.987 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:42.987 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:42.987 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.987 03:18:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.246 03:18:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:43.246 "name": "raid_bdev1", 00:23:43.246 "uuid": "c4825431-b64e-492a-abdd-3e11b996d3e6", 00:23:43.246 "strip_size_kb": 0, 00:23:43.246 "state": "online", 00:23:43.246 "raid_level": "raid1", 00:23:43.246 "superblock": true, 00:23:43.246 "num_base_bdevs": 2, 00:23:43.246 "num_base_bdevs_discovered": 2, 00:23:43.246 "num_base_bdevs_operational": 2, 00:23:43.246 "base_bdevs_list": [ 00:23:43.246 { 00:23:43.246 "name": "pt1", 00:23:43.246 "uuid": "4b66aa39-3604-525d-9341-c04e30da7029", 00:23:43.246 "is_configured": true, 00:23:43.246 "data_offset": 256, 00:23:43.246 "data_size": 7936 00:23:43.246 }, 00:23:43.246 { 00:23:43.246 "name": "pt2", 00:23:43.246 "uuid": "948942c3-999b-5890-af8b-549c7aa3c5f7", 00:23:43.246 "is_configured": true, 00:23:43.246 "data_offset": 256, 00:23:43.246 "data_size": 7936 00:23:43.246 } 00:23:43.246 ] 00:23:43.246 }' 00:23:43.246 03:18:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:43.246 03:18:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:43.812 03:18:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:43.812 03:18:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:23:43.812 03:18:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:43.812 03:18:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:23:43.812 03:18:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:43.813 03:18:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # local name 00:23:43.813 03:18:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:43.813 03:18:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:44.071 [2024-05-15 03:18:14.995597] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:44.071 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:44.071 "name": "raid_bdev1", 00:23:44.071 "aliases": [ 00:23:44.071 "c4825431-b64e-492a-abdd-3e11b996d3e6" 00:23:44.071 ], 00:23:44.071 "product_name": "Raid Volume", 00:23:44.071 
"block_size": 4096, 00:23:44.071 "num_blocks": 7936, 00:23:44.071 "uuid": "c4825431-b64e-492a-abdd-3e11b996d3e6", 00:23:44.071 "assigned_rate_limits": { 00:23:44.071 "rw_ios_per_sec": 0, 00:23:44.071 "rw_mbytes_per_sec": 0, 00:23:44.071 "r_mbytes_per_sec": 0, 00:23:44.071 "w_mbytes_per_sec": 0 00:23:44.071 }, 00:23:44.071 "claimed": false, 00:23:44.071 "zoned": false, 00:23:44.071 "supported_io_types": { 00:23:44.071 "read": true, 00:23:44.071 "write": true, 00:23:44.071 "unmap": false, 00:23:44.071 "write_zeroes": true, 00:23:44.071 "flush": false, 00:23:44.071 "reset": true, 00:23:44.071 "compare": false, 00:23:44.071 "compare_and_write": false, 00:23:44.071 "abort": false, 00:23:44.071 "nvme_admin": false, 00:23:44.071 "nvme_io": false 00:23:44.071 }, 00:23:44.071 "memory_domains": [ 00:23:44.071 { 00:23:44.071 "dma_device_id": "system", 00:23:44.071 "dma_device_type": 1 00:23:44.071 }, 00:23:44.071 { 00:23:44.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.071 "dma_device_type": 2 00:23:44.071 }, 00:23:44.071 { 00:23:44.071 "dma_device_id": "system", 00:23:44.071 "dma_device_type": 1 00:23:44.071 }, 00:23:44.071 { 00:23:44.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.071 "dma_device_type": 2 00:23:44.071 } 00:23:44.071 ], 00:23:44.071 "driver_specific": { 00:23:44.071 "raid": { 00:23:44.071 "uuid": "c4825431-b64e-492a-abdd-3e11b996d3e6", 00:23:44.071 "strip_size_kb": 0, 00:23:44.071 "state": "online", 00:23:44.071 "raid_level": "raid1", 00:23:44.071 "superblock": true, 00:23:44.071 "num_base_bdevs": 2, 00:23:44.071 "num_base_bdevs_discovered": 2, 00:23:44.071 "num_base_bdevs_operational": 2, 00:23:44.071 "base_bdevs_list": [ 00:23:44.071 { 00:23:44.071 "name": "pt1", 00:23:44.071 "uuid": "4b66aa39-3604-525d-9341-c04e30da7029", 00:23:44.071 "is_configured": true, 00:23:44.071 "data_offset": 256, 00:23:44.071 "data_size": 7936 00:23:44.071 }, 00:23:44.071 { 00:23:44.071 "name": "pt2", 00:23:44.071 "uuid": "948942c3-999b-5890-af8b-549c7aa3c5f7", 00:23:44.071 "is_configured": true, 00:23:44.071 "data_offset": 256, 00:23:44.071 "data_size": 7936 00:23:44.071 } 00:23:44.071 ] 00:23:44.071 } 00:23:44.071 } 00:23:44.071 }' 00:23:44.071 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:44.071 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:23:44.071 pt2' 00:23:44.071 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:44.071 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:44.071 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:44.330 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:44.330 "name": "pt1", 00:23:44.330 "aliases": [ 00:23:44.330 "4b66aa39-3604-525d-9341-c04e30da7029" 00:23:44.330 ], 00:23:44.330 "product_name": "passthru", 00:23:44.330 "block_size": 4096, 00:23:44.330 "num_blocks": 8192, 00:23:44.330 "uuid": "4b66aa39-3604-525d-9341-c04e30da7029", 00:23:44.330 "assigned_rate_limits": { 00:23:44.330 "rw_ios_per_sec": 0, 00:23:44.330 "rw_mbytes_per_sec": 0, 00:23:44.330 "r_mbytes_per_sec": 0, 00:23:44.330 "w_mbytes_per_sec": 0 00:23:44.330 }, 00:23:44.330 "claimed": true, 00:23:44.330 "claim_type": "exclusive_write", 
00:23:44.330 "zoned": false, 00:23:44.330 "supported_io_types": { 00:23:44.330 "read": true, 00:23:44.330 "write": true, 00:23:44.330 "unmap": true, 00:23:44.330 "write_zeroes": true, 00:23:44.330 "flush": true, 00:23:44.330 "reset": true, 00:23:44.330 "compare": false, 00:23:44.330 "compare_and_write": false, 00:23:44.330 "abort": true, 00:23:44.330 "nvme_admin": false, 00:23:44.330 "nvme_io": false 00:23:44.330 }, 00:23:44.330 "memory_domains": [ 00:23:44.330 { 00:23:44.330 "dma_device_id": "system", 00:23:44.330 "dma_device_type": 1 00:23:44.330 }, 00:23:44.330 { 00:23:44.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.330 "dma_device_type": 2 00:23:44.330 } 00:23:44.330 ], 00:23:44.330 "driver_specific": { 00:23:44.330 "passthru": { 00:23:44.330 "name": "pt1", 00:23:44.330 "base_bdev_name": "malloc1" 00:23:44.330 } 00:23:44.330 } 00:23:44.330 }' 00:23:44.330 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:44.330 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:44.330 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:23:44.330 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:44.330 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:44.330 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:44.589 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:44.589 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:44.589 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:44.589 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:44.589 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:44.589 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:44.589 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:44.589 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:44.589 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:44.849 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:44.849 "name": "pt2", 00:23:44.849 "aliases": [ 00:23:44.849 "948942c3-999b-5890-af8b-549c7aa3c5f7" 00:23:44.849 ], 00:23:44.849 "product_name": "passthru", 00:23:44.849 "block_size": 4096, 00:23:44.849 "num_blocks": 8192, 00:23:44.849 "uuid": "948942c3-999b-5890-af8b-549c7aa3c5f7", 00:23:44.849 "assigned_rate_limits": { 00:23:44.849 "rw_ios_per_sec": 0, 00:23:44.849 "rw_mbytes_per_sec": 0, 00:23:44.849 "r_mbytes_per_sec": 0, 00:23:44.849 "w_mbytes_per_sec": 0 00:23:44.849 }, 00:23:44.849 "claimed": true, 00:23:44.849 "claim_type": "exclusive_write", 00:23:44.849 "zoned": false, 00:23:44.849 "supported_io_types": { 00:23:44.849 "read": true, 00:23:44.849 "write": true, 00:23:44.849 "unmap": true, 00:23:44.849 "write_zeroes": true, 00:23:44.849 "flush": true, 00:23:44.849 "reset": true, 00:23:44.849 "compare": false, 00:23:44.849 "compare_and_write": false, 00:23:44.849 "abort": true, 00:23:44.849 "nvme_admin": false, 00:23:44.849 
"nvme_io": false 00:23:44.849 }, 00:23:44.849 "memory_domains": [ 00:23:44.849 { 00:23:44.849 "dma_device_id": "system", 00:23:44.849 "dma_device_type": 1 00:23:44.849 }, 00:23:44.849 { 00:23:44.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.849 "dma_device_type": 2 00:23:44.849 } 00:23:44.849 ], 00:23:44.849 "driver_specific": { 00:23:44.849 "passthru": { 00:23:44.849 "name": "pt2", 00:23:44.849 "base_bdev_name": "malloc2" 00:23:44.849 } 00:23:44.849 } 00:23:44.849 }' 00:23:44.849 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:44.849 03:18:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:45.108 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:23:45.108 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:45.108 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:45.108 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:45.108 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:45.108 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:45.108 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:45.108 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:45.108 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:45.366 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:45.366 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:45.366 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:45.625 [2024-05-15 03:18:16.527684] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:45.625 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c4825431-b64e-492a-abdd-3e11b996d3e6 00:23:45.625 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z c4825431-b64e-492a-abdd-3e11b996d3e6 ']' 00:23:45.625 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:45.625 [2024-05-15 03:18:16.691909] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:45.625 [2024-05-15 03:18:16.691926] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:45.625 [2024-05-15 03:18:16.691974] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:45.625 [2024-05-15 03:18:16.692028] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:45.625 [2024-05-15 03:18:16.692037] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x211d760 name raid_bdev1, state offline 00:23:45.625 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.625 03:18:16 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:45.883 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:45.883 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:45.883 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:45.883 03:18:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:46.142 03:18:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:46.142 03:18:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:46.400 03:18:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:46.400 03:18:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:46.660 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:46.919 [2024-05-15 03:18:17.854953] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:46.919 [2024-05-15 03:18:17.856369] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:46.919 [2024-05-15 03:18:17.856424] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:46.919 [2024-05-15 03:18:17.856460] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:46.919 [2024-05-15 03:18:17.856476] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:46.919 [2024-05-15 03:18:17.856483] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x211f190 name raid_bdev1, state configuring 00:23:46.919 request: 00:23:46.919 { 00:23:46.919 "name": "raid_bdev1", 00:23:46.919 "raid_level": "raid1", 00:23:46.919 "base_bdevs": [ 00:23:46.919 "malloc1", 00:23:46.919 "malloc2" 00:23:46.919 ], 00:23:46.919 "superblock": false, 00:23:46.919 "method": "bdev_raid_create", 00:23:46.919 "req_id": 1 00:23:46.919 } 00:23:46.919 Got JSON-RPC error response 00:23:46.919 response: 00:23:46.919 { 00:23:46.919 "code": -17, 00:23:46.919 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:46.919 } 00:23:46.919 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:23:46.919 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:46.919 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:46.919 03:18:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:46.919 03:18:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:46.919 03:18:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:47.178 [2024-05-15 03:18:18.268005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:47.178 [2024-05-15 03:18:18.268046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:47.178 [2024-05-15 03:18:18.268079] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x211b320 00:23:47.178 [2024-05-15 03:18:18.268090] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:47.178 [2024-05-15 03:18:18.269764] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:47.178 [2024-05-15 03:18:18.269790] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:47.178 [2024-05-15 03:18:18.269861] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:47.178 [2024-05-15 03:18:18.269884] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:47.178 pt1 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:47.178 03:18:18 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.178 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.437 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:47.437 "name": "raid_bdev1", 00:23:47.437 "uuid": "c4825431-b64e-492a-abdd-3e11b996d3e6", 00:23:47.437 "strip_size_kb": 0, 00:23:47.437 "state": "configuring", 00:23:47.437 "raid_level": "raid1", 00:23:47.437 "superblock": true, 00:23:47.437 "num_base_bdevs": 2, 00:23:47.437 "num_base_bdevs_discovered": 1, 00:23:47.437 "num_base_bdevs_operational": 2, 00:23:47.437 "base_bdevs_list": [ 00:23:47.437 { 00:23:47.437 "name": "pt1", 00:23:47.437 "uuid": "4b66aa39-3604-525d-9341-c04e30da7029", 00:23:47.437 "is_configured": true, 00:23:47.437 "data_offset": 256, 00:23:47.437 "data_size": 7936 00:23:47.437 }, 00:23:47.437 { 00:23:47.437 "name": null, 00:23:47.437 "uuid": "948942c3-999b-5890-af8b-549c7aa3c5f7", 00:23:47.437 "is_configured": false, 00:23:47.437 "data_offset": 256, 00:23:47.437 "data_size": 7936 00:23:47.437 } 00:23:47.437 ] 00:23:47.437 }' 00:23:47.437 03:18:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:47.437 03:18:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:48.005 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:48.005 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:48.005 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:48.005 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:48.265 [2024-05-15 03:18:19.366972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:48.265 [2024-05-15 03:18:19.367015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.265 [2024-05-15 03:18:19.367033] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f77c30 00:23:48.265 [2024-05-15 03:18:19.367042] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.265 [2024-05-15 
03:18:19.367376] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.265 [2024-05-15 03:18:19.367390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:48.265 [2024-05-15 03:18:19.367445] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:48.265 [2024-05-15 03:18:19.367461] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:48.265 [2024-05-15 03:18:19.367556] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x211fcc0 00:23:48.265 [2024-05-15 03:18:19.367565] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:48.265 [2024-05-15 03:18:19.367738] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f705d0 00:23:48.265 [2024-05-15 03:18:19.367881] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x211fcc0 00:23:48.265 [2024-05-15 03:18:19.367890] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x211fcc0 00:23:48.265 [2024-05-15 03:18:19.367990] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.265 pt2 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.265 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.524 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:48.524 "name": "raid_bdev1", 00:23:48.524 "uuid": "c4825431-b64e-492a-abdd-3e11b996d3e6", 00:23:48.524 "strip_size_kb": 0, 00:23:48.524 "state": "online", 00:23:48.524 "raid_level": "raid1", 00:23:48.524 "superblock": true, 00:23:48.524 "num_base_bdevs": 2, 00:23:48.524 "num_base_bdevs_discovered": 2, 00:23:48.524 "num_base_bdevs_operational": 2, 00:23:48.524 "base_bdevs_list": [ 00:23:48.524 { 00:23:48.524 "name": "pt1", 00:23:48.524 "uuid": "4b66aa39-3604-525d-9341-c04e30da7029", 00:23:48.524 "is_configured": true, 
00:23:48.524 "data_offset": 256, 00:23:48.524 "data_size": 7936 00:23:48.524 }, 00:23:48.524 { 00:23:48.524 "name": "pt2", 00:23:48.524 "uuid": "948942c3-999b-5890-af8b-549c7aa3c5f7", 00:23:48.524 "is_configured": true, 00:23:48.524 "data_offset": 256, 00:23:48.524 "data_size": 7936 00:23:48.524 } 00:23:48.524 ] 00:23:48.524 }' 00:23:48.524 03:18:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:48.524 03:18:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:49.092 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:49.092 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:23:49.092 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:49.092 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:23:49.092 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:49.092 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # local name 00:23:49.092 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:49.092 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:49.351 [2024-05-15 03:18:20.422034] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:49.351 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:49.351 "name": "raid_bdev1", 00:23:49.351 "aliases": [ 00:23:49.351 "c4825431-b64e-492a-abdd-3e11b996d3e6" 00:23:49.351 ], 00:23:49.351 "product_name": "Raid Volume", 00:23:49.351 "block_size": 4096, 00:23:49.351 "num_blocks": 7936, 00:23:49.351 "uuid": "c4825431-b64e-492a-abdd-3e11b996d3e6", 00:23:49.351 "assigned_rate_limits": { 00:23:49.351 "rw_ios_per_sec": 0, 00:23:49.351 "rw_mbytes_per_sec": 0, 00:23:49.351 "r_mbytes_per_sec": 0, 00:23:49.351 "w_mbytes_per_sec": 0 00:23:49.351 }, 00:23:49.351 "claimed": false, 00:23:49.351 "zoned": false, 00:23:49.351 "supported_io_types": { 00:23:49.351 "read": true, 00:23:49.351 "write": true, 00:23:49.351 "unmap": false, 00:23:49.351 "write_zeroes": true, 00:23:49.351 "flush": false, 00:23:49.351 "reset": true, 00:23:49.351 "compare": false, 00:23:49.351 "compare_and_write": false, 00:23:49.351 "abort": false, 00:23:49.351 "nvme_admin": false, 00:23:49.351 "nvme_io": false 00:23:49.351 }, 00:23:49.351 "memory_domains": [ 00:23:49.351 { 00:23:49.351 "dma_device_id": "system", 00:23:49.351 "dma_device_type": 1 00:23:49.351 }, 00:23:49.351 { 00:23:49.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:49.351 "dma_device_type": 2 00:23:49.351 }, 00:23:49.351 { 00:23:49.351 "dma_device_id": "system", 00:23:49.351 "dma_device_type": 1 00:23:49.351 }, 00:23:49.351 { 00:23:49.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:49.351 "dma_device_type": 2 00:23:49.351 } 00:23:49.351 ], 00:23:49.351 "driver_specific": { 00:23:49.351 "raid": { 00:23:49.351 "uuid": "c4825431-b64e-492a-abdd-3e11b996d3e6", 00:23:49.351 "strip_size_kb": 0, 00:23:49.351 "state": "online", 00:23:49.351 "raid_level": "raid1", 00:23:49.351 "superblock": true, 00:23:49.351 "num_base_bdevs": 2, 00:23:49.351 "num_base_bdevs_discovered": 2, 00:23:49.351 "num_base_bdevs_operational": 2, 
00:23:49.351 "base_bdevs_list": [ 00:23:49.351 { 00:23:49.351 "name": "pt1", 00:23:49.351 "uuid": "4b66aa39-3604-525d-9341-c04e30da7029", 00:23:49.351 "is_configured": true, 00:23:49.351 "data_offset": 256, 00:23:49.351 "data_size": 7936 00:23:49.351 }, 00:23:49.351 { 00:23:49.351 "name": "pt2", 00:23:49.351 "uuid": "948942c3-999b-5890-af8b-549c7aa3c5f7", 00:23:49.351 "is_configured": true, 00:23:49.351 "data_offset": 256, 00:23:49.351 "data_size": 7936 00:23:49.351 } 00:23:49.351 ] 00:23:49.351 } 00:23:49.351 } 00:23:49.351 }' 00:23:49.351 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:49.351 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:23:49.351 pt2' 00:23:49.351 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:49.351 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:49.351 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:49.610 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:49.610 "name": "pt1", 00:23:49.610 "aliases": [ 00:23:49.610 "4b66aa39-3604-525d-9341-c04e30da7029" 00:23:49.610 ], 00:23:49.610 "product_name": "passthru", 00:23:49.610 "block_size": 4096, 00:23:49.610 "num_blocks": 8192, 00:23:49.610 "uuid": "4b66aa39-3604-525d-9341-c04e30da7029", 00:23:49.610 "assigned_rate_limits": { 00:23:49.610 "rw_ios_per_sec": 0, 00:23:49.610 "rw_mbytes_per_sec": 0, 00:23:49.610 "r_mbytes_per_sec": 0, 00:23:49.610 "w_mbytes_per_sec": 0 00:23:49.610 }, 00:23:49.610 "claimed": true, 00:23:49.610 "claim_type": "exclusive_write", 00:23:49.610 "zoned": false, 00:23:49.610 "supported_io_types": { 00:23:49.610 "read": true, 00:23:49.610 "write": true, 00:23:49.610 "unmap": true, 00:23:49.610 "write_zeroes": true, 00:23:49.610 "flush": true, 00:23:49.610 "reset": true, 00:23:49.610 "compare": false, 00:23:49.610 "compare_and_write": false, 00:23:49.610 "abort": true, 00:23:49.610 "nvme_admin": false, 00:23:49.610 "nvme_io": false 00:23:49.610 }, 00:23:49.610 "memory_domains": [ 00:23:49.610 { 00:23:49.610 "dma_device_id": "system", 00:23:49.610 "dma_device_type": 1 00:23:49.610 }, 00:23:49.610 { 00:23:49.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:49.610 "dma_device_type": 2 00:23:49.610 } 00:23:49.610 ], 00:23:49.610 "driver_specific": { 00:23:49.610 "passthru": { 00:23:49.610 "name": "pt1", 00:23:49.610 "base_bdev_name": "malloc1" 00:23:49.610 } 00:23:49.610 } 00:23:49.610 }' 00:23:49.610 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:49.869 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:49.869 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:23:49.869 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:49.869 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:49.869 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:49.869 03:18:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:49.869 03:18:20 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:49.869 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:49.869 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:50.128 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:50.128 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:50.128 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:50.128 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:50.128 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:50.386 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:50.386 "name": "pt2", 00:23:50.386 "aliases": [ 00:23:50.386 "948942c3-999b-5890-af8b-549c7aa3c5f7" 00:23:50.386 ], 00:23:50.386 "product_name": "passthru", 00:23:50.386 "block_size": 4096, 00:23:50.386 "num_blocks": 8192, 00:23:50.386 "uuid": "948942c3-999b-5890-af8b-549c7aa3c5f7", 00:23:50.386 "assigned_rate_limits": { 00:23:50.386 "rw_ios_per_sec": 0, 00:23:50.386 "rw_mbytes_per_sec": 0, 00:23:50.386 "r_mbytes_per_sec": 0, 00:23:50.386 "w_mbytes_per_sec": 0 00:23:50.386 }, 00:23:50.386 "claimed": true, 00:23:50.386 "claim_type": "exclusive_write", 00:23:50.386 "zoned": false, 00:23:50.386 "supported_io_types": { 00:23:50.386 "read": true, 00:23:50.386 "write": true, 00:23:50.386 "unmap": true, 00:23:50.386 "write_zeroes": true, 00:23:50.386 "flush": true, 00:23:50.386 "reset": true, 00:23:50.386 "compare": false, 00:23:50.386 "compare_and_write": false, 00:23:50.386 "abort": true, 00:23:50.386 "nvme_admin": false, 00:23:50.386 "nvme_io": false 00:23:50.386 }, 00:23:50.386 "memory_domains": [ 00:23:50.386 { 00:23:50.386 "dma_device_id": "system", 00:23:50.386 "dma_device_type": 1 00:23:50.386 }, 00:23:50.386 { 00:23:50.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.386 "dma_device_type": 2 00:23:50.386 } 00:23:50.386 ], 00:23:50.386 "driver_specific": { 00:23:50.386 "passthru": { 00:23:50.386 "name": "pt2", 00:23:50.386 "base_bdev_name": "malloc2" 00:23:50.386 } 00:23:50.386 } 00:23:50.386 }' 00:23:50.386 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:50.386 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:50.386 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:23:50.386 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:50.386 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:50.645 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:50.645 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:50.645 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:50.645 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:50.645 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:50.645 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:50.645 03:18:21 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:50.645 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:50.645 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:50.903 [2024-05-15 03:18:21.970169] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:50.903 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' c4825431-b64e-492a-abdd-3e11b996d3e6 '!=' c4825431-b64e-492a-abdd-3e11b996d3e6 ']' 00:23:50.903 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:50.903 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # case $1 in 00:23:50.903 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@215 -- # return 0 00:23:50.904 03:18:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:51.162 [2024-05-15 03:18:22.134398] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:51.162 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:51.162 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:51.162 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:51.162 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:51.162 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:51.162 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:51.162 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:51.162 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:51.162 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:51.162 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:51.162 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.162 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.420 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:51.420 "name": "raid_bdev1", 00:23:51.420 "uuid": "c4825431-b64e-492a-abdd-3e11b996d3e6", 00:23:51.420 "strip_size_kb": 0, 00:23:51.420 "state": "online", 00:23:51.420 "raid_level": "raid1", 00:23:51.420 "superblock": true, 00:23:51.420 "num_base_bdevs": 2, 00:23:51.420 "num_base_bdevs_discovered": 1, 00:23:51.420 "num_base_bdevs_operational": 1, 00:23:51.420 "base_bdevs_list": [ 00:23:51.420 { 00:23:51.420 "name": null, 00:23:51.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.420 "is_configured": false, 00:23:51.420 "data_offset": 256, 00:23:51.420 "data_size": 7936 00:23:51.420 }, 00:23:51.420 { 00:23:51.420 "name": "pt2", 00:23:51.420 "uuid": 
"948942c3-999b-5890-af8b-549c7aa3c5f7", 00:23:51.420 "is_configured": true, 00:23:51.420 "data_offset": 256, 00:23:51.420 "data_size": 7936 00:23:51.420 } 00:23:51.420 ] 00:23:51.420 }' 00:23:51.420 03:18:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:51.420 03:18:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:51.988 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:52.246 [2024-05-15 03:18:23.261391] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:52.246 [2024-05-15 03:18:23.261416] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:52.246 [2024-05-15 03:18:23.261464] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:52.246 [2024-05-15 03:18:23.261505] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:52.246 [2024-05-15 03:18:23.261514] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x211fcc0 name raid_bdev1, state offline 00:23:52.246 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.246 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:52.504 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:52.504 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:52.504 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:52.504 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:52.504 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:52.762 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:52.762 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:52.762 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:52.762 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:52.762 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:23:52.762 03:18:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:53.020 [2024-05-15 03:18:24.035423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:53.020 [2024-05-15 03:18:24.035469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.020 [2024-05-15 03:18:24.035489] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f77e60 00:23:53.020 [2024-05-15 03:18:24.035498] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.020 [2024-05-15 03:18:24.037166] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.020 [2024-05-15 
03:18:24.037190] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:53.020 [2024-05-15 03:18:24.037255] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:53.020 [2024-05-15 03:18:24.037278] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:53.020 [2024-05-15 03:18:24.037358] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f70240 00:23:53.020 [2024-05-15 03:18:24.037367] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:53.020 [2024-05-15 03:18:24.037541] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f6e8d0 00:23:53.021 [2024-05-15 03:18:24.037668] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f70240 00:23:53.021 [2024-05-15 03:18:24.037676] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f70240 00:23:53.021 [2024-05-15 03:18:24.037777] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:53.021 pt2 00:23:53.021 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:53.021 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:53.021 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:53.021 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:53.021 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:53.021 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:53.021 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:53.021 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:53.021 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:53.021 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:53.021 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.021 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.278 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:53.278 "name": "raid_bdev1", 00:23:53.278 "uuid": "c4825431-b64e-492a-abdd-3e11b996d3e6", 00:23:53.278 "strip_size_kb": 0, 00:23:53.278 "state": "online", 00:23:53.278 "raid_level": "raid1", 00:23:53.278 "superblock": true, 00:23:53.278 "num_base_bdevs": 2, 00:23:53.278 "num_base_bdevs_discovered": 1, 00:23:53.278 "num_base_bdevs_operational": 1, 00:23:53.278 "base_bdevs_list": [ 00:23:53.278 { 00:23:53.278 "name": null, 00:23:53.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.278 "is_configured": false, 00:23:53.278 "data_offset": 256, 00:23:53.278 "data_size": 7936 00:23:53.278 }, 00:23:53.278 { 00:23:53.278 "name": "pt2", 00:23:53.278 "uuid": "948942c3-999b-5890-af8b-549c7aa3c5f7", 00:23:53.278 "is_configured": true, 00:23:53.278 "data_offset": 256, 00:23:53.278 "data_size": 7936 00:23:53.278 } 00:23:53.278 ] 00:23:53.278 }' 
00:23:53.278 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:53.278 03:18:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:53.845 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:23:53.845 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:53.845 03:18:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:23:54.104 [2024-05-15 03:18:25.174660] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:54.104 03:18:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # '[' c4825431-b64e-492a-abdd-3e11b996d3e6 '!=' c4825431-b64e-492a-abdd-3e11b996d3e6 ']' 00:23:54.104 03:18:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@568 -- # killprocess 4193504 00:23:54.104 03:18:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@946 -- # '[' -z 4193504 ']' 00:23:54.104 03:18:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # kill -0 4193504 00:23:54.104 03:18:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # uname 00:23:54.104 03:18:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:54.104 03:18:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4193504 00:23:54.104 03:18:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:54.104 03:18:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:54.104 03:18:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4193504' 00:23:54.104 killing process with pid 4193504 00:23:54.104 03:18:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@965 -- # kill 4193504 00:23:54.104 [2024-05-15 03:18:25.247179] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:54.104 [2024-05-15 03:18:25.247236] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:54.104 [2024-05-15 03:18:25.247281] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:54.104 [2024-05-15 03:18:25.247291] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f70240 name raid_bdev1, state offline 00:23:54.104 03:18:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # wait 4193504 00:23:54.363 [2024-05-15 03:18:25.263543] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:54.363 03:18:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # return 0 00:23:54.363 00:23:54.363 real 0m13.683s 00:23:54.363 user 0m25.262s 00:23:54.363 sys 0m1.967s 00:23:54.363 03:18:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:54.363 03:18:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:54.363 ************************************ 00:23:54.363 END TEST raid_superblock_test_4k 00:23:54.363 ************************************ 00:23:54.363 03:18:25 bdev_raid -- bdev/bdev_raid.sh@846 -- # '[' true = true ']' 00:23:54.363 03:18:25 bdev_raid -- bdev/bdev_raid.sh@847 -- # run_test 
raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:23:54.363 03:18:25 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:23:54.363 03:18:25 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:54.622 03:18:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:54.622 ************************************ 00:23:54.622 START TEST raid_rebuild_test_sb_4k 00:23:54.622 ************************************ 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=2 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local superblock=true 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local background_io=false 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local verify=true 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # (( i = 1 )) 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # echo BaseBdev1 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # echo BaseBdev2 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # local strip_size 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@582 -- # local create_arg 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@584 -- # local data_offset 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # '[' true = true ']' 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # create_arg+=' -s' 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # raid_pid=2464 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # waitforlisten 2464 /var/tmp/spdk-raid.sock 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 2464 ']' 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:54.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:54.622 03:18:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:54.622 [2024-05-15 03:18:25.626121] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:23:54.622 [2024-05-15 03:18:25.626178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464 ] 00:23:54.622 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:54.622 Zero copy mechanism will not be used. 00:23:54.622 [2024-05-15 03:18:25.725641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.880 [2024-05-15 03:18:25.825300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.880 [2024-05-15 03:18:25.887520] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:54.881 [2024-05-15 03:18:25.887553] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:55.447 03:18:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:55.447 03:18:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:23:55.447 03:18:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:23:55.447 03:18:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:23:55.705 BaseBdev1_malloc 00:23:55.705 03:18:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:55.964 [2024-05-15 03:18:27.080881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:55.964 [2024-05-15 03:18:27.080929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.964 [2024-05-15 03:18:27.080949] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe13b00 00:23:55.964 [2024-05-15 03:18:27.080959] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.964 [2024-05-15 03:18:27.082680] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.964 [2024-05-15 03:18:27.082708] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:55.964 BaseBdev1 00:23:55.964 03:18:27 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:23:55.964 03:18:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:23:56.223 BaseBdev2_malloc 00:23:56.223 03:18:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:56.482 [2024-05-15 03:18:27.494689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:56.482 [2024-05-15 03:18:27.494734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.482 [2024-05-15 03:18:27.494751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfb9860 00:23:56.482 [2024-05-15 03:18:27.494760] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.482 [2024-05-15 03:18:27.496324] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.482 [2024-05-15 03:18:27.496352] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:56.482 BaseBdev2 00:23:56.482 03:18:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:23:56.740 spare_malloc 00:23:56.740 03:18:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:56.740 spare_delay 00:23:56.740 03:18:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@614 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:56.999 [2024-05-15 03:18:28.060660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:56.999 [2024-05-15 03:18:28.060705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.999 [2024-05-15 03:18:28.060725] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfb9f50 00:23:56.999 [2024-05-15 03:18:28.060735] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.999 [2024-05-15 03:18:28.062313] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.999 [2024-05-15 03:18:28.062339] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:56.999 spare 00:23:56.999 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:57.257 [2024-05-15 03:18:28.229153] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:57.257 [2024-05-15 03:18:28.230510] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:57.257 [2024-05-15 03:18:28.230673] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0xe0c460 00:23:57.257 [2024-05-15 03:18:28.230686] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:57.257 [2024-05-15 
03:18:28.230894] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfb88e0 00:23:57.257 [2024-05-15 03:18:28.231047] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe0c460 00:23:57.257 [2024-05-15 03:18:28.231055] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe0c460 00:23:57.257 [2024-05-15 03:18:28.231153] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.257 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:57.257 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:57.257 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:57.257 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:57.257 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:57.257 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:57.257 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:57.257 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:57.257 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:57.257 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:57.258 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.258 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.516 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:57.516 "name": "raid_bdev1", 00:23:57.516 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:23:57.516 "strip_size_kb": 0, 00:23:57.516 "state": "online", 00:23:57.516 "raid_level": "raid1", 00:23:57.516 "superblock": true, 00:23:57.516 "num_base_bdevs": 2, 00:23:57.516 "num_base_bdevs_discovered": 2, 00:23:57.516 "num_base_bdevs_operational": 2, 00:23:57.516 "base_bdevs_list": [ 00:23:57.516 { 00:23:57.516 "name": "BaseBdev1", 00:23:57.516 "uuid": "30e448cc-086a-5a8f-a675-df00242da1d5", 00:23:57.516 "is_configured": true, 00:23:57.516 "data_offset": 256, 00:23:57.516 "data_size": 7936 00:23:57.516 }, 00:23:57.516 { 00:23:57.516 "name": "BaseBdev2", 00:23:57.516 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:23:57.516 "is_configured": true, 00:23:57.516 "data_offset": 256, 00:23:57.516 "data_size": 7936 00:23:57.516 } 00:23:57.516 ] 00:23:57.516 }' 00:23:57.516 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:57.516 03:18:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.082 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:23:58.082 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:58.341 [2024-05-15 03:18:29.348340] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:23:58.341 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=7936 00:23:58.341 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.341 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:58.599 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # data_offset=256 00:23:58.599 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@626 -- # '[' false = true ']' 00:23:58.599 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' true = true ']' 00:23:58.599 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@630 -- # local write_unit_size 00:23:58.599 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:58.599 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:58.599 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:58.600 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:58.600 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:58.600 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:58.600 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:23:58.600 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:58.600 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:58.600 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:58.858 [2024-05-15 03:18:29.861705] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe0a920 00:23:58.858 /dev/nbd0 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:58.858 1+0 records in 00:23:58.858 1+0 records out 00:23:58.858 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023751 s, 17.2 MB/s 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # '[' raid1 = raid5f ']' 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@638 -- # write_unit_size=1 00:23:58.858 03:18:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:23:59.862 7936+0 records in 00:23:59.862 7936+0 records out 00:23:59.862 32505856 bytes (33 MB, 31 MiB) copied, 0.810428 s, 40.1 MB/s 00:23:59.862 03:18:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@641 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:59.862 03:18:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:59.862 03:18:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:59.862 03:18:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:59.862 03:18:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:23:59.862 03:18:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:59.862 03:18:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:59.862 [2024-05-15 03:18:30.999556] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.862 03:18:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:59.862 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:59.862 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:59.862 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:59.862 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:59.862 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:59.862 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:23:59.862 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:23:59.862 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:00.120 [2024-05-15 03:18:31.240248] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
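The block above is the data-seeding and degrade step of the rebuild test: the raid bdev is exported over NBD, filled with 7936 random 4 KiB blocks (the full array size reported earlier), detached, and then degraded by pulling BaseBdev1. Condensed into the bare commands the trace shows — same RPC socket and device names as this run; a sketch of the step, not the test script itself:

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    # Export raid_bdev1 as /dev/nbd0 and seed it with random data
    $rpc -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
    # Detach the NBD device, then degrade the array by removing a base bdev
    $rpc -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1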
00:24:00.120 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:00.120 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:00.120 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:00.120 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:00.120 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:00.120 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:00.120 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:00.120 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:00.120 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:00.120 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:00.120 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.120 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.379 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:00.379 "name": "raid_bdev1", 00:24:00.379 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:00.379 "strip_size_kb": 0, 00:24:00.379 "state": "online", 00:24:00.379 "raid_level": "raid1", 00:24:00.379 "superblock": true, 00:24:00.379 "num_base_bdevs": 2, 00:24:00.379 "num_base_bdevs_discovered": 1, 00:24:00.379 "num_base_bdevs_operational": 1, 00:24:00.379 "base_bdevs_list": [ 00:24:00.379 { 00:24:00.379 "name": null, 00:24:00.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.379 "is_configured": false, 00:24:00.379 "data_offset": 256, 00:24:00.379 "data_size": 7936 00:24:00.379 }, 00:24:00.379 { 00:24:00.379 "name": "BaseBdev2", 00:24:00.379 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:00.379 "is_configured": true, 00:24:00.379 "data_offset": 256, 00:24:00.379 "data_size": 7936 00:24:00.379 } 00:24:00.379 ] 00:24:00.379 }' 00:24:00.379 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:00.379 03:18:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 03:18:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:01.316 [2024-05-15 03:18:32.383332] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:01.316 [2024-05-15 03:18:32.388247] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe0ab50 00:24:01.316 [2024-05-15 03:18:32.390334] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:01.316 03:18:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # sleep 1 00:24:02.250 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:02.251 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:24:02.251 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:02.251 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:02.251 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:02.509 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.509 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.509 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:02.509 "name": "raid_bdev1", 00:24:02.509 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:02.509 "strip_size_kb": 0, 00:24:02.509 "state": "online", 00:24:02.509 "raid_level": "raid1", 00:24:02.509 "superblock": true, 00:24:02.509 "num_base_bdevs": 2, 00:24:02.509 "num_base_bdevs_discovered": 2, 00:24:02.509 "num_base_bdevs_operational": 2, 00:24:02.509 "process": { 00:24:02.509 "type": "rebuild", 00:24:02.509 "target": "spare", 00:24:02.509 "progress": { 00:24:02.509 "blocks": 3072, 00:24:02.509 "percent": 38 00:24:02.509 } 00:24:02.509 }, 00:24:02.509 "base_bdevs_list": [ 00:24:02.509 { 00:24:02.509 "name": "spare", 00:24:02.509 "uuid": "30298b24-2582-5189-9e07-c730ec6986e8", 00:24:02.509 "is_configured": true, 00:24:02.509 "data_offset": 256, 00:24:02.509 "data_size": 7936 00:24:02.509 }, 00:24:02.509 { 00:24:02.509 "name": "BaseBdev2", 00:24:02.509 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:02.509 "is_configured": true, 00:24:02.509 "data_offset": 256, 00:24:02.509 "data_size": 7936 00:24:02.509 } 00:24:02.509 ] 00:24:02.509 }' 00:24:02.509 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:02.767 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:02.767 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:02.767 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.767 03:18:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:03.025 [2024-05-15 03:18:33.981196] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:03.025 [2024-05-15 03:18:34.002618] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:03.025 [2024-05-15 03:18:34.002663] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.025 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:03.025 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:03.025 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:03.025 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:03.025 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:03.025 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:03.025 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:03.025 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:03.025 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:03.025 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:03.025 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.025 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.284 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:03.284 "name": "raid_bdev1", 00:24:03.284 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:03.284 "strip_size_kb": 0, 00:24:03.284 "state": "online", 00:24:03.284 "raid_level": "raid1", 00:24:03.284 "superblock": true, 00:24:03.284 "num_base_bdevs": 2, 00:24:03.284 "num_base_bdevs_discovered": 1, 00:24:03.284 "num_base_bdevs_operational": 1, 00:24:03.284 "base_bdevs_list": [ 00:24:03.284 { 00:24:03.284 "name": null, 00:24:03.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.284 "is_configured": false, 00:24:03.284 "data_offset": 256, 00:24:03.284 "data_size": 7936 00:24:03.284 }, 00:24:03.284 { 00:24:03.284 "name": "BaseBdev2", 00:24:03.284 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:03.284 "is_configured": true, 00:24:03.284 "data_offset": 256, 00:24:03.284 "data_size": 7936 00:24:03.284 } 00:24:03.284 ] 00:24:03.284 }' 00:24:03.284 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:03.284 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.851 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:03.851 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:03.851 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:03.851 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:03.851 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:03.851 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.851 03:18:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.109 03:18:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:04.109 "name": "raid_bdev1", 00:24:04.109 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:04.109 "strip_size_kb": 0, 00:24:04.109 "state": "online", 00:24:04.109 "raid_level": "raid1", 00:24:04.109 "superblock": true, 00:24:04.109 "num_base_bdevs": 2, 00:24:04.109 "num_base_bdevs_discovered": 1, 00:24:04.109 "num_base_bdevs_operational": 1, 00:24:04.109 "base_bdevs_list": [ 00:24:04.109 { 00:24:04.109 "name": null, 00:24:04.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.109 "is_configured": false, 00:24:04.109 "data_offset": 
256, 00:24:04.109 "data_size": 7936 00:24:04.109 }, 00:24:04.109 { 00:24:04.109 "name": "BaseBdev2", 00:24:04.109 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:04.109 "is_configured": true, 00:24:04.109 "data_offset": 256, 00:24:04.109 "data_size": 7936 00:24:04.109 } 00:24:04.109 ] 00:24:04.109 }' 00:24:04.109 03:18:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:04.109 03:18:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:04.109 03:18:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:04.109 03:18:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:04.109 03:18:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@667 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:04.367 [2024-05-15 03:18:35.463076] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:04.367 [2024-05-15 03:18:35.467867] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xaef860 00:24:04.367 [2024-05-15 03:18:35.469386] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:04.367 03:18:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@668 -- # sleep 1 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.741 "name": "raid_bdev1", 00:24:05.741 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:05.741 "strip_size_kb": 0, 00:24:05.741 "state": "online", 00:24:05.741 "raid_level": "raid1", 00:24:05.741 "superblock": true, 00:24:05.741 "num_base_bdevs": 2, 00:24:05.741 "num_base_bdevs_discovered": 2, 00:24:05.741 "num_base_bdevs_operational": 2, 00:24:05.741 "process": { 00:24:05.741 "type": "rebuild", 00:24:05.741 "target": "spare", 00:24:05.741 "progress": { 00:24:05.741 "blocks": 3072, 00:24:05.741 "percent": 38 00:24:05.741 } 00:24:05.741 }, 00:24:05.741 "base_bdevs_list": [ 00:24:05.741 { 00:24:05.741 "name": "spare", 00:24:05.741 "uuid": "30298b24-2582-5189-9e07-c730ec6986e8", 00:24:05.741 "is_configured": true, 00:24:05.741 "data_offset": 256, 00:24:05.741 "data_size": 7936 00:24:05.741 }, 00:24:05.741 { 00:24:05.741 "name": "BaseBdev2", 00:24:05.741 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:05.741 "is_configured": true, 00:24:05.741 "data_offset": 256, 00:24:05.741 "data_size": 7936 00:24:05.741 } 00:24:05.741 ] 00:24:05.741 }' 00:24:05.741 
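The raid_bdev_info JSON captured above is all verify_raid_bdev_process needs: a rebuild is in flight iff .process.type is "rebuild" and .process.target names the device being rebuilt onto. A minimal stand-alone sketch of that check — the two jq filters are verbatim from the trace at @190/@191, while the shell variable names around them are illustrative:

    process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
    process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")
    # Both filters fall back to "none" when no background process is running
    [[ $process_type == rebuild && $process_target == spare ]] || exit 1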
03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@671 -- # '[' true = true ']' 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@671 -- # '[' = false ']' 00:24:05.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 671: [: =: unary operator expected 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=2 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@698 -- # '[' 2 -gt 2 ']' 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # local timeout=893 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.741 03:18:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.000 03:18:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:06.000 "name": "raid_bdev1", 00:24:06.000 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:06.000 "strip_size_kb": 0, 00:24:06.000 "state": "online", 00:24:06.000 "raid_level": "raid1", 00:24:06.000 "superblock": true, 00:24:06.000 "num_base_bdevs": 2, 00:24:06.000 "num_base_bdevs_discovered": 2, 00:24:06.000 "num_base_bdevs_operational": 2, 00:24:06.000 "process": { 00:24:06.000 "type": "rebuild", 00:24:06.000 "target": "spare", 00:24:06.000 "progress": { 00:24:06.000 "blocks": 4096, 00:24:06.000 "percent": 51 00:24:06.000 } 00:24:06.000 }, 00:24:06.000 "base_bdevs_list": [ 00:24:06.000 { 00:24:06.000 "name": "spare", 00:24:06.000 "uuid": "30298b24-2582-5189-9e07-c730ec6986e8", 00:24:06.000 "is_configured": true, 00:24:06.000 "data_offset": 256, 00:24:06.000 "data_size": 7936 00:24:06.000 }, 00:24:06.000 { 00:24:06.000 "name": "BaseBdev2", 00:24:06.000 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:06.000 "is_configured": true, 00:24:06.000 "data_offset": 256, 00:24:06.000 "data_size": 7936 00:24:06.000 } 00:24:06.000 ] 00:24:06.000 }' 00:24:06.000 03:18:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:06.000 
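The "[: =: unary operator expected" complaint from bdev_raid.sh line 671 above is a genuine shell bug rather than a test failure: the left-hand operand of the second test expands to an empty string, so the single-bracket test sees '[' = false ']' (exactly as traced) and has nothing to compare. It is harmless here because execution simply continues to the @696 branch, but the usual fix is to quote the expansion or use the double-bracket test, which does not word-split. A sketch — the variable name is illustrative, not taken from the script:

    # With flag unset or empty, [ $flag = false ] becomes [ = false ] and errors out.
    [ "$flag" = false ]    # quoting keeps the empty operand in place
    [[ $flag == false ]]   # [[ ]] never word-splits its operands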
03:18:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.000 03:18:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:06.258 03:18:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.258 03:18:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # sleep 1 00:24:07.194 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:24:07.194 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.194 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:07.194 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:07.194 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:07.194 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:07.194 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.194 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.453 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:07.453 "name": "raid_bdev1", 00:24:07.453 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:07.453 "strip_size_kb": 0, 00:24:07.453 "state": "online", 00:24:07.453 "raid_level": "raid1", 00:24:07.453 "superblock": true, 00:24:07.453 "num_base_bdevs": 2, 00:24:07.453 "num_base_bdevs_discovered": 2, 00:24:07.453 "num_base_bdevs_operational": 2, 00:24:07.453 "process": { 00:24:07.453 "type": "rebuild", 00:24:07.453 "target": "spare", 00:24:07.453 "progress": { 00:24:07.453 "blocks": 7424, 00:24:07.453 "percent": 93 00:24:07.453 } 00:24:07.453 }, 00:24:07.453 "base_bdevs_list": [ 00:24:07.453 { 00:24:07.453 "name": "spare", 00:24:07.453 "uuid": "30298b24-2582-5189-9e07-c730ec6986e8", 00:24:07.453 "is_configured": true, 00:24:07.453 "data_offset": 256, 00:24:07.453 "data_size": 7936 00:24:07.453 }, 00:24:07.453 { 00:24:07.453 "name": "BaseBdev2", 00:24:07.453 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:07.453 "is_configured": true, 00:24:07.453 "data_offset": 256, 00:24:07.453 "data_size": 7936 00:24:07.453 } 00:24:07.453 ] 00:24:07.453 }' 00:24:07.453 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:07.453 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.453 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:07.453 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.453 03:18:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # sleep 1 00:24:07.453 [2024-05-15 03:18:38.592843] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:07.453 [2024-05-15 03:18:38.592907] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:07.453 [2024-05-15 03:18:38.592990] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.830 "name": "raid_bdev1", 00:24:08.830 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:08.830 "strip_size_kb": 0, 00:24:08.830 "state": "online", 00:24:08.830 "raid_level": "raid1", 00:24:08.830 "superblock": true, 00:24:08.830 "num_base_bdevs": 2, 00:24:08.830 "num_base_bdevs_discovered": 2, 00:24:08.830 "num_base_bdevs_operational": 2, 00:24:08.830 "base_bdevs_list": [ 00:24:08.830 { 00:24:08.830 "name": "spare", 00:24:08.830 "uuid": "30298b24-2582-5189-9e07-c730ec6986e8", 00:24:08.830 "is_configured": true, 00:24:08.830 "data_offset": 256, 00:24:08.830 "data_size": 7936 00:24:08.830 }, 00:24:08.830 { 00:24:08.830 "name": "BaseBdev2", 00:24:08.830 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:08.830 "is_configured": true, 00:24:08.830 "data_offset": 256, 00:24:08.830 "data_size": 7936 00:24:08.830 } 00:24:08.830 ] 00:24:08.830 }' 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # break 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.830 03:18:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.089 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # 
raid_bdev_info='{ 00:24:09.089 "name": "raid_bdev1", 00:24:09.089 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:09.089 "strip_size_kb": 0, 00:24:09.089 "state": "online", 00:24:09.089 "raid_level": "raid1", 00:24:09.089 "superblock": true, 00:24:09.089 "num_base_bdevs": 2, 00:24:09.089 "num_base_bdevs_discovered": 2, 00:24:09.089 "num_base_bdevs_operational": 2, 00:24:09.089 "base_bdevs_list": [ 00:24:09.089 { 00:24:09.089 "name": "spare", 00:24:09.089 "uuid": "30298b24-2582-5189-9e07-c730ec6986e8", 00:24:09.089 "is_configured": true, 00:24:09.089 "data_offset": 256, 00:24:09.089 "data_size": 7936 00:24:09.089 }, 00:24:09.089 { 00:24:09.089 "name": "BaseBdev2", 00:24:09.089 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:09.089 "is_configured": true, 00:24:09.089 "data_offset": 256, 00:24:09.089 "data_size": 7936 00:24:09.089 } 00:24:09.089 ] 00:24:09.089 }' 00:24:09.089 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:09.089 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:09.089 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:09.089 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:09.089 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:09.089 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:09.348 "name": "raid_bdev1", 00:24:09.348 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:09.348 "strip_size_kb": 0, 00:24:09.348 "state": "online", 00:24:09.348 "raid_level": "raid1", 00:24:09.348 "superblock": true, 00:24:09.348 "num_base_bdevs": 2, 00:24:09.348 "num_base_bdevs_discovered": 2, 00:24:09.348 "num_base_bdevs_operational": 2, 00:24:09.348 "base_bdevs_list": [ 00:24:09.348 { 00:24:09.348 "name": "spare", 00:24:09.348 "uuid": "30298b24-2582-5189-9e07-c730ec6986e8", 00:24:09.348 "is_configured": true, 00:24:09.348 "data_offset": 256, 00:24:09.348 "data_size": 7936 00:24:09.348 }, 00:24:09.348 { 00:24:09.348 "name": 
"BaseBdev2", 00:24:09.348 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:09.348 "is_configured": true, 00:24:09.348 "data_offset": 256, 00:24:09.348 "data_size": 7936 00:24:09.348 } 00:24:09.348 ] 00:24:09.348 }' 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:09.348 03:18:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:10.283 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@724 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:10.283 [2024-05-15 03:18:41.356839] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:10.283 [2024-05-15 03:18:41.356871] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:10.283 [2024-05-15 03:18:41.356927] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:10.283 [2024-05-15 03:18:41.356981] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:10.283 [2024-05-15 03:18:41.356990] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe0c460 name raid_bdev1, state offline 00:24:10.283 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@725 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.283 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@725 -- # jq length 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]] 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@727 -- # '[' true = true ']' 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@728 -- # '[' false = true ']' 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:10.542 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:10.800 /dev/nbd0 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@865 -- # local i 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:10.800 1+0 records in 00:24:10.800 1+0 records out 00:24:10.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246103 s, 16.6 MB/s 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:10.800 03:18:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:11.057 /dev/nbd1 00:24:11.057 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:11.057 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:11.057 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:24:11.057 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:24:11.057 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:24:11.057 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:24:11.057 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:11.315 1+0 records in 00:24:11.315 1+0 records out 00:24:11.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247406 s, 16.6 MB/s 00:24:11.315 
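waitfornbd, traced in full above, is a two-stage poll: wait for the device to show up in /proc/partitions, then prove it actually services I/O with one direct 4 KiB read whose size is checked via stat. A condensed sketch of the helper exactly as the trace shows it; the sleep between retries is an assumption, since this run succeeds on the first pass of both loops:

    waitfornbd() {
        local nbd_name=$1 i size
        local scratch=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
        # Stage 1: wait (up to 20 tries) for the kernel to publish the device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; not visible in this trace
        done
        # Stage 2: one direct 4K read must produce a non-empty file
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of="$scratch" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$scratch")
                rm -f "$scratch"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1   # assumed back-off; not visible in this trace
        done
        return 1
    }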
03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:11.315 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:11.574 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:11.574 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:11.574 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:11.574 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:11.574 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:11.574 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:11.574 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:11.574 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:11.574 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:11.574 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:11.833 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:11.833 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:11.833 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:11.833 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:11.834 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:11.834 03:18:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:11.834 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:11.834 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:11.834 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # '[' true = true ']' 00:24:11.834 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:24:11.834 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev1 ']' 00:24:11.834 03:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:12.092 03:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:12.351 [2024-05-15 03:18:43.340514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:12.351 [2024-05-15 03:18:43.340560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.351 [2024-05-15 03:18:43.340578] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe13110 00:24:12.351 [2024-05-15 03:18:43.340587] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.351 [2024-05-15 03:18:43.342283] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.351 [2024-05-15 03:18:43.342313] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:12.351 [2024-05-15 03:18:43.342376] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:12.351 [2024-05-15 03:18:43.342403] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:12.351 BaseBdev1 00:24:12.351 03:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:24:12.351 03:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev2 ']' 00:24:12.351 03:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:12.609 03:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:12.868 [2024-05-15 03:18:43.849897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:12.868 [2024-05-15 03:18:43.849936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.868 [2024-05-15 03:18:43.849951] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe0cb00 00:24:12.868 [2024-05-15 03:18:43.849960] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.868 [2024-05-15 03:18:43.850264] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.868 [2024-05-15 03:18:43.850279] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:12.868 [2024-05-15 03:18:43.850335] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: 
*DEBUG*: raid superblock found on bdev BaseBdev2 00:24:12.868 [2024-05-15 03:18:43.850345] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:12.868 [2024-05-15 03:18:43.850351] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:12.868 [2024-05-15 03:18:43.850364] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfb8e70 name raid_bdev1, state configuring 00:24:12.868 [2024-05-15 03:18:43.850389] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:12.868 BaseBdev2 00:24:12.868 03:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:13.127 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:13.387 [2024-05-15 03:18:44.355243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:13.387 [2024-05-15 03:18:44.355277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.387 [2024-05-15 03:18:44.355291] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe0b690 00:24:13.387 [2024-05-15 03:18:44.355300] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.387 [2024-05-15 03:18:44.355621] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.387 [2024-05-15 03:18:44.355635] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:13.387 [2024-05-15 03:18:44.355700] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:13.387 [2024-05-15 03:18:44.355717] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:13.387 spare 00:24:13.387 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:13.387 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:13.387 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:13.387 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:13.387 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:13.387 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:13.387 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:13.387 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:13.387 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:13.387 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:13.387 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.387 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.387 
[2024-05-15 03:18:44.456039] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0xe0d3e0 00:24:13.387 [2024-05-15 03:18:44.456053] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:13.387 [2024-05-15 03:18:44.456230] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe14160 00:24:13.387 [2024-05-15 03:18:44.456373] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe0d3e0 00:24:13.387 [2024-05-15 03:18:44.456382] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe0d3e0 00:24:13.387 [2024-05-15 03:18:44.456480] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.646 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:13.646 "name": "raid_bdev1", 00:24:13.646 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:13.646 "strip_size_kb": 0, 00:24:13.646 "state": "online", 00:24:13.646 "raid_level": "raid1", 00:24:13.646 "superblock": true, 00:24:13.646 "num_base_bdevs": 2, 00:24:13.646 "num_base_bdevs_discovered": 2, 00:24:13.646 "num_base_bdevs_operational": 2, 00:24:13.646 "base_bdevs_list": [ 00:24:13.646 { 00:24:13.646 "name": "spare", 00:24:13.646 "uuid": "30298b24-2582-5189-9e07-c730ec6986e8", 00:24:13.646 "is_configured": true, 00:24:13.646 "data_offset": 256, 00:24:13.646 "data_size": 7936 00:24:13.646 }, 00:24:13.646 { 00:24:13.646 "name": "BaseBdev2", 00:24:13.646 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:13.646 "is_configured": true, 00:24:13.646 "data_offset": 256, 00:24:13.646 "data_size": 7936 00:24:13.646 } 00:24:13.646 ] 00:24:13.646 }' 00:24:13.646 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:13.646 03:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:14.256 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:14.256 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:14.256 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:14.256 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:14.256 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:14.256 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.256 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.515 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:14.515 "name": "raid_bdev1", 00:24:14.515 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:14.515 "strip_size_kb": 0, 00:24:14.515 "state": "online", 00:24:14.515 "raid_level": "raid1", 00:24:14.515 "superblock": true, 00:24:14.515 "num_base_bdevs": 2, 00:24:14.515 "num_base_bdevs_discovered": 2, 00:24:14.515 "num_base_bdevs_operational": 2, 00:24:14.515 "base_bdevs_list": [ 00:24:14.515 { 00:24:14.515 "name": "spare", 00:24:14.515 "uuid": "30298b24-2582-5189-9e07-c730ec6986e8", 00:24:14.515 "is_configured": true, 00:24:14.515 "data_offset": 256, 00:24:14.515 "data_size": 7936 00:24:14.515 }, 00:24:14.515 { 00:24:14.515 
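verify_raid_bdev_state runs the same bdev_raid_get_bdevs + jq capture as the process check, but its per-field assertions execute after the xtrace_disable / set +x seen above, so they never appear in this log. A hedged reconstruction of what those hidden checks presumably do with the captured JSON — the field names come from the JSON above, while the comparison code itself is an assumption about the helper's body:

    # Assumed shape of the assertions hidden behind xtrace_disable
    [ "$(jq -r '.state' <<< "$raid_bdev_info")" = online ] || return 1
    [ "$(jq -r '.raid_level' <<< "$raid_bdev_info")" = raid1 ] || return 1
    [ "$(jq -r '.strip_size_kb' <<< "$raid_bdev_info")" -eq 0 ] || return 1
    [ "$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")" -eq 2 ] || return 1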
"name": "BaseBdev2", 00:24:14.515 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:14.515 "is_configured": true, 00:24:14.515 "data_offset": 256, 00:24:14.515 "data_size": 7936 00:24:14.515 } 00:24:14.515 ] 00:24:14.515 }' 00:24:14.515 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:14.515 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:14.515 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:14.515 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:14.515 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.515 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:14.775 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.775 03:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:15.034 [2024-05-15 03:18:46.080003] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:15.034 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:15.034 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:15.034 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:15.034 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:15.034 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:15.034 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:15.034 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:15.034 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:15.034 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:15.034 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:15.034 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.034 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.293 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:15.293 "name": "raid_bdev1", 00:24:15.293 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:15.293 "strip_size_kb": 0, 00:24:15.293 "state": "online", 00:24:15.293 "raid_level": "raid1", 00:24:15.293 "superblock": true, 00:24:15.293 "num_base_bdevs": 2, 00:24:15.293 "num_base_bdevs_discovered": 1, 00:24:15.293 "num_base_bdevs_operational": 1, 00:24:15.293 "base_bdevs_list": [ 00:24:15.293 { 00:24:15.293 "name": null, 00:24:15.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.293 "is_configured": false, 00:24:15.293 "data_offset": 
256, 00:24:15.293 "data_size": 7936 00:24:15.293 }, 00:24:15.293 { 00:24:15.293 "name": "BaseBdev2", 00:24:15.293 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:15.293 "is_configured": true, 00:24:15.293 "data_offset": 256, 00:24:15.293 "data_size": 7936 00:24:15.293 } 00:24:15.293 ] 00:24:15.293 }' 00:24:15.293 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:15.293 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:15.860 03:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:16.119 [2024-05-15 03:18:47.203010] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:16.119 [2024-05-15 03:18:47.203161] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:16.119 [2024-05-15 03:18:47.203176] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:16.119 [2024-05-15 03:18:47.203201] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:16.119 [2024-05-15 03:18:47.207873] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe12ff0 00:24:16.119 [2024-05-15 03:18:47.209364] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:16.119 03:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # sleep 1 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:17.495 "name": "raid_bdev1", 00:24:17.495 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:17.495 "strip_size_kb": 0, 00:24:17.495 "state": "online", 00:24:17.495 "raid_level": "raid1", 00:24:17.495 "superblock": true, 00:24:17.495 "num_base_bdevs": 2, 00:24:17.495 "num_base_bdevs_discovered": 2, 00:24:17.495 "num_base_bdevs_operational": 2, 00:24:17.495 "process": { 00:24:17.495 "type": "rebuild", 00:24:17.495 "target": "spare", 00:24:17.495 "progress": { 00:24:17.495 "blocks": 3072, 00:24:17.495 "percent": 38 00:24:17.495 } 00:24:17.495 }, 00:24:17.495 "base_bdevs_list": [ 00:24:17.495 { 00:24:17.495 "name": "spare", 00:24:17.495 "uuid": "30298b24-2582-5189-9e07-c730ec6986e8", 00:24:17.495 "is_configured": true, 00:24:17.495 "data_offset": 256, 00:24:17.495 "data_size": 7936 00:24:17.495 }, 00:24:17.495 { 00:24:17.495 "name": "BaseBdev2", 00:24:17.495 "uuid": 
"d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:17.495 "is_configured": true, 00:24:17.495 "data_offset": 256, 00:24:17.495 "data_size": 7936 00:24:17.495 } 00:24:17.495 ] 00:24:17.495 }' 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:17.495 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:17.754 [2024-05-15 03:18:48.808314] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:17.754 [2024-05-15 03:18:48.821640] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:17.754 [2024-05-15 03:18:48.821685] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:17.754 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:17.754 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:17.754 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:17.754 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:17.754 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:17.754 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:17.754 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:17.754 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:17.754 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:17.754 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:17.754 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.754 03:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.013 03:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:18.013 "name": "raid_bdev1", 00:24:18.013 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:18.013 "strip_size_kb": 0, 00:24:18.013 "state": "online", 00:24:18.013 "raid_level": "raid1", 00:24:18.013 "superblock": true, 00:24:18.013 "num_base_bdevs": 2, 00:24:18.013 "num_base_bdevs_discovered": 1, 00:24:18.013 "num_base_bdevs_operational": 1, 00:24:18.013 "base_bdevs_list": [ 00:24:18.013 { 00:24:18.013 "name": null, 00:24:18.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.013 "is_configured": false, 00:24:18.013 "data_offset": 256, 00:24:18.013 "data_size": 7936 00:24:18.013 }, 00:24:18.013 { 00:24:18.013 "name": "BaseBdev2", 00:24:18.013 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:18.013 "is_configured": true, 
00:24:18.013 "data_offset": 256, 00:24:18.013 "data_size": 7936 00:24:18.013 } 00:24:18.013 ] 00:24:18.013 }' 00:24:18.013 03:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:18.013 03:18:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:18.581 03:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:18.840 [2024-05-15 03:18:49.961079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:18.840 [2024-05-15 03:18:49.961130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:18.840 [2024-05-15 03:18:49.961152] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe116e0 00:24:18.840 [2024-05-15 03:18:49.961162] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:18.840 [2024-05-15 03:18:49.961541] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:18.840 [2024-05-15 03:18:49.961558] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:18.840 [2024-05-15 03:18:49.961641] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:18.840 [2024-05-15 03:18:49.961653] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:18.840 [2024-05-15 03:18:49.961661] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:18.840 [2024-05-15 03:18:49.961676] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:18.840 [2024-05-15 03:18:49.966385] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe2a9f0 00:24:18.840 spare 00:24:18.840 [2024-05-15 03:18:49.967806] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:18.840 03:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:20.216 03:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:20.216 03:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:20.216 03:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:20.216 03:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:20.216 03:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:20.216 03:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.216 03:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.216 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:20.216 "name": "raid_bdev1", 00:24:20.216 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:20.216 "strip_size_kb": 0, 00:24:20.216 "state": "online", 00:24:20.216 "raid_level": "raid1", 00:24:20.216 "superblock": true, 00:24:20.216 "num_base_bdevs": 2, 00:24:20.216 "num_base_bdevs_discovered": 2, 00:24:20.216 
"num_base_bdevs_operational": 2, 00:24:20.216 "process": { 00:24:20.216 "type": "rebuild", 00:24:20.216 "target": "spare", 00:24:20.216 "progress": { 00:24:20.216 "blocks": 3072, 00:24:20.216 "percent": 38 00:24:20.216 } 00:24:20.216 }, 00:24:20.216 "base_bdevs_list": [ 00:24:20.216 { 00:24:20.216 "name": "spare", 00:24:20.216 "uuid": "30298b24-2582-5189-9e07-c730ec6986e8", 00:24:20.216 "is_configured": true, 00:24:20.216 "data_offset": 256, 00:24:20.216 "data_size": 7936 00:24:20.216 }, 00:24:20.216 { 00:24:20.216 "name": "BaseBdev2", 00:24:20.216 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:20.216 "is_configured": true, 00:24:20.216 "data_offset": 256, 00:24:20.216 "data_size": 7936 00:24:20.216 } 00:24:20.216 ] 00:24:20.216 }' 00:24:20.216 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:20.216 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:20.216 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:20.216 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:20.216 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:20.474 [2024-05-15 03:18:51.577428] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:20.474 [2024-05-15 03:18:51.580030] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:20.474 [2024-05-15 03:18:51.580068] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.474 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:20.474 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:20.474 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:20.474 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:20.474 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:20.474 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:20.474 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:20.474 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:20.474 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:20.474 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:20.474 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.474 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.732 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:20.732 "name": "raid_bdev1", 00:24:20.732 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:20.732 "strip_size_kb": 0, 00:24:20.732 "state": "online", 00:24:20.732 "raid_level": "raid1", 
00:24:20.732 "superblock": true, 00:24:20.732 "num_base_bdevs": 2, 00:24:20.732 "num_base_bdevs_discovered": 1, 00:24:20.732 "num_base_bdevs_operational": 1, 00:24:20.732 "base_bdevs_list": [ 00:24:20.732 { 00:24:20.732 "name": null, 00:24:20.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.732 "is_configured": false, 00:24:20.732 "data_offset": 256, 00:24:20.732 "data_size": 7936 00:24:20.732 }, 00:24:20.732 { 00:24:20.732 "name": "BaseBdev2", 00:24:20.732 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:20.732 "is_configured": true, 00:24:20.732 "data_offset": 256, 00:24:20.732 "data_size": 7936 00:24:20.732 } 00:24:20.732 ] 00:24:20.732 }' 00:24:20.732 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:20.732 03:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.670 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:21.670 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:21.670 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:21.670 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:21.670 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:21.695 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.695 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.695 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:21.695 "name": "raid_bdev1", 00:24:21.695 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:21.695 "strip_size_kb": 0, 00:24:21.695 "state": "online", 00:24:21.695 "raid_level": "raid1", 00:24:21.695 "superblock": true, 00:24:21.695 "num_base_bdevs": 2, 00:24:21.695 "num_base_bdevs_discovered": 1, 00:24:21.695 "num_base_bdevs_operational": 1, 00:24:21.695 "base_bdevs_list": [ 00:24:21.695 { 00:24:21.695 "name": null, 00:24:21.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.695 "is_configured": false, 00:24:21.695 "data_offset": 256, 00:24:21.695 "data_size": 7936 00:24:21.695 }, 00:24:21.695 { 00:24:21.695 "name": "BaseBdev2", 00:24:21.695 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:21.695 "is_configured": true, 00:24:21.695 "data_offset": 256, 00:24:21.695 "data_size": 7936 00:24:21.695 } 00:24:21.695 ] 00:24:21.695 }' 00:24:21.695 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:21.695 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:21.695 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:21.954 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:21.954 03:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:21.954 03:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@785 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:22.213 [2024-05-15 03:18:53.317016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:22.213 [2024-05-15 03:18:53.317063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.213 [2024-05-15 03:18:53.317084] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe13d30 00:24:22.213 [2024-05-15 03:18:53.317094] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.213 [2024-05-15 03:18:53.317444] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.213 [2024-05-15 03:18:53.317460] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:22.213 [2024-05-15 03:18:53.317522] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:22.213 [2024-05-15 03:18:53.317533] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:22.213 [2024-05-15 03:18:53.317540] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:22.213 BaseBdev1 00:24:22.213 03:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # sleep 1 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@787 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:23.602 "name": "raid_bdev1", 00:24:23.602 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:23.602 "strip_size_kb": 0, 00:24:23.602 "state": "online", 00:24:23.602 "raid_level": "raid1", 00:24:23.602 "superblock": true, 00:24:23.602 "num_base_bdevs": 2, 00:24:23.602 "num_base_bdevs_discovered": 1, 00:24:23.602 "num_base_bdevs_operational": 1, 00:24:23.602 "base_bdevs_list": [ 00:24:23.602 { 00:24:23.602 "name": null, 00:24:23.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.602 "is_configured": false, 00:24:23.602 "data_offset": 256, 00:24:23.602 "data_size": 7936 
00:24:23.602 }, 00:24:23.602 { 00:24:23.602 "name": "BaseBdev2", 00:24:23.602 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:23.602 "is_configured": true, 00:24:23.602 "data_offset": 256, 00:24:23.602 "data_size": 7936 00:24:23.602 } 00:24:23.602 ] 00:24:23.602 }' 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:23.602 03:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.169 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@788 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:24.169 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:24.169 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:24.169 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:24.169 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:24.169 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.169 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:24.428 "name": "raid_bdev1", 00:24:24.428 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:24.428 "strip_size_kb": 0, 00:24:24.428 "state": "online", 00:24:24.428 "raid_level": "raid1", 00:24:24.428 "superblock": true, 00:24:24.428 "num_base_bdevs": 2, 00:24:24.428 "num_base_bdevs_discovered": 1, 00:24:24.428 "num_base_bdevs_operational": 1, 00:24:24.428 "base_bdevs_list": [ 00:24:24.428 { 00:24:24.428 "name": null, 00:24:24.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.428 "is_configured": false, 00:24:24.428 "data_offset": 256, 00:24:24.428 "data_size": 7936 00:24:24.428 }, 00:24:24.428 { 00:24:24.428 "name": "BaseBdev2", 00:24:24.428 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:24.428 "is_configured": true, 00:24:24.428 "data_offset": 256, 00:24:24.428 "data_size": 7936 00:24:24.428 } 00:24:24.428 ] 00:24:24.428 }' 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@789 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:24.428 
03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:24.428 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:24.687 [2024-05-15 03:18:55.795695] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:24.687 [2024-05-15 03:18:55.795825] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:24.687 [2024-05-15 03:18:55.795839] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:24.687 request: 00:24:24.687 { 00:24:24.687 "raid_bdev": "raid_bdev1", 00:24:24.687 "base_bdev": "BaseBdev1", 00:24:24.687 "method": "bdev_raid_add_base_bdev", 00:24:24.687 "req_id": 1 00:24:24.687 } 00:24:24.687 Got JSON-RPC error response 00:24:24.687 response: 00:24:24.687 { 00:24:24.687 "code": -22, 00:24:24.687 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:24.687 } 00:24:24.687 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:24:24.687 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:24.687 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:24.687 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:24.687 03:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@790 -- # sleep 1 00:24:26.064 03:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:26.064 03:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:26.064 03:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:26.064 03:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:26.064 03:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:26.064 03:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:26.064 03:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:26.064 03:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:26.064 03:18:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:26.064 03:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:26.064 03:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.064 03:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.064 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:26.064 "name": "raid_bdev1", 00:24:26.064 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:26.064 "strip_size_kb": 0, 00:24:26.064 "state": "online", 00:24:26.064 "raid_level": "raid1", 00:24:26.064 "superblock": true, 00:24:26.064 "num_base_bdevs": 2, 00:24:26.064 "num_base_bdevs_discovered": 1, 00:24:26.064 "num_base_bdevs_operational": 1, 00:24:26.064 "base_bdevs_list": [ 00:24:26.064 { 00:24:26.064 "name": null, 00:24:26.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.064 "is_configured": false, 00:24:26.064 "data_offset": 256, 00:24:26.064 "data_size": 7936 00:24:26.064 }, 00:24:26.064 { 00:24:26.064 "name": "BaseBdev2", 00:24:26.064 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:26.064 "is_configured": true, 00:24:26.064 "data_offset": 256, 00:24:26.064 "data_size": 7936 00:24:26.064 } 00:24:26.064 ] 00:24:26.064 }' 00:24:26.064 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:26.064 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.632 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@792 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:26.632 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:26.632 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:26.632 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:26.632 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:26.632 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.632 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.891 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:26.891 "name": "raid_bdev1", 00:24:26.891 "uuid": "d751c1a3-ad51-474d-8cb7-2653a5bc571f", 00:24:26.891 "strip_size_kb": 0, 00:24:26.891 "state": "online", 00:24:26.891 "raid_level": "raid1", 00:24:26.891 "superblock": true, 00:24:26.891 "num_base_bdevs": 2, 00:24:26.891 "num_base_bdevs_discovered": 1, 00:24:26.891 "num_base_bdevs_operational": 1, 00:24:26.891 "base_bdevs_list": [ 00:24:26.891 { 00:24:26.891 "name": null, 00:24:26.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.891 "is_configured": false, 00:24:26.891 "data_offset": 256, 00:24:26.891 "data_size": 7936 00:24:26.891 }, 00:24:26.891 { 00:24:26.891 "name": "BaseBdev2", 00:24:26.891 "uuid": "d6b43ccb-35bc-503c-813e-2a4888bfec8d", 00:24:26.891 "is_configured": true, 00:24:26.891 "data_offset": 256, 00:24:26.891 "data_size": 7936 00:24:26.891 } 00:24:26.891 ] 
00:24:26.891 }' 00:24:26.891 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:26.891 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:26.891 03:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:26.891 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:26.891 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@795 -- # killprocess 2464 00:24:26.891 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 2464 ']' 00:24:26.891 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 2464 00:24:26.891 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:24:26.891 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:26.891 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2464 00:24:27.150 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:27.150 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:27.150 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2464' 00:24:27.150 killing process with pid 2464 00:24:27.150 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@965 -- # kill 2464 00:24:27.150 Received shutdown signal, test time was about 60.000000 seconds 00:24:27.150 00:24:27.150 Latency(us) 00:24:27.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.150 =================================================================================================================== 00:24:27.150 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:27.150 [2024-05-15 03:18:58.059174] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:27.150 [2024-05-15 03:18:58.059270] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:27.150 [2024-05-15 03:18:58.059310] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:27.150 [2024-05-15 03:18:58.059320] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe0d3e0 name raid_bdev1, state offline 00:24:27.150 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # wait 2464 00:24:27.150 [2024-05-15 03:18:58.084316] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:27.150 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@797 -- # return 0 00:24:27.150 00:24:27.150 real 0m32.747s 00:24:27.150 user 0m52.474s 00:24:27.150 sys 0m4.307s 00:24:27.150 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:27.150 03:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.150 ************************************ 00:24:27.150 END TEST raid_rebuild_test_sb_4k 00:24:27.150 ************************************ 00:24:27.408 03:18:58 bdev_raid -- bdev/bdev_raid.sh@850 -- # base_malloc_params='-m 32' 00:24:27.408 03:18:58 bdev_raid -- bdev/bdev_raid.sh@851 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 
2 true 00:24:27.408 03:18:58 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:27.408 03:18:58 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:27.408 03:18:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:27.408 ************************************ 00:24:27.408 START TEST raid_state_function_test_sb_md_separate 00:24:27.408 ************************************ 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # raid_pid=8301 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 8301' 00:24:27.408 Process raid 
pid: 8301 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@247 -- # waitforlisten 8301 /var/tmp/spdk-raid.sock 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 8301 ']' 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:27.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:27.408 03:18:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:27.408 [2024-05-15 03:18:58.448705] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:24:27.408 [2024-05-15 03:18:58.448757] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.408 [2024-05-15 03:18:58.547060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.667 [2024-05-15 03:18:58.641203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.667 [2024-05-15 03:18:58.699756] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:27.667 [2024-05-15 03:18:58.699790] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:28.254 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:28.254 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:24:28.254 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:28.512 [2024-05-15 03:18:59.630775] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:28.512 [2024-05-15 03:18:59.630816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:28.512 [2024-05-15 03:18:59.630825] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:28.512 [2024-05-15 03:18:59.630833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:28.512 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:28.512 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:28.512 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:28.512 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:28.512 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:28.512 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:28.512 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:28.512 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:28.512 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:28.512 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:28.512 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.512 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.771 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:28.771 "name": "Existed_Raid", 00:24:28.771 "uuid": "1b5d2950-bf24-4899-a816-86e5c4cf8207", 00:24:28.771 "strip_size_kb": 0, 00:24:28.771 "state": "configuring", 00:24:28.771 "raid_level": "raid1", 00:24:28.771 "superblock": true, 00:24:28.771 "num_base_bdevs": 2, 00:24:28.771 "num_base_bdevs_discovered": 0, 00:24:28.771 "num_base_bdevs_operational": 2, 00:24:28.771 "base_bdevs_list": [ 00:24:28.771 { 00:24:28.771 "name": "BaseBdev1", 00:24:28.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.771 "is_configured": false, 00:24:28.771 "data_offset": 0, 00:24:28.771 "data_size": 0 00:24:28.771 }, 00:24:28.771 { 00:24:28.771 "name": "BaseBdev2", 00:24:28.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.771 "is_configured": false, 00:24:28.771 "data_offset": 0, 00:24:28.771 "data_size": 0 00:24:28.771 } 00:24:28.771 ] 00:24:28.771 }' 00:24:28.771 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:28.771 03:18:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:29.707 03:19:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:29.707 [2024-05-15 03:19:00.765655] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:29.707 [2024-05-15 03:19:00.765683] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23e1dc0 name Existed_Raid, state configuring 00:24:29.707 03:19:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:29.966 [2024-05-15 03:19:01.022363] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:29.966 [2024-05-15 03:19:01.022392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:29.966 
[2024-05-15 03:19:01.022405] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:29.966 [2024-05-15 03:19:01.022414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:29.966 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:24:30.224 [2024-05-15 03:19:01.285035] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:30.224 BaseBdev1 00:24:30.224 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:24:30.224 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:30.224 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:30.224 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:24:30.224 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:30.224 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:30.224 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:30.483 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:30.741 [ 00:24:30.741 { 00:24:30.741 "name": "BaseBdev1", 00:24:30.741 "aliases": [ 00:24:30.741 "f26c035d-03d0-4b67-a16c-df38da2c22df" 00:24:30.741 ], 00:24:30.741 "product_name": "Malloc disk", 00:24:30.741 "block_size": 4096, 00:24:30.741 "num_blocks": 8192, 00:24:30.741 "uuid": "f26c035d-03d0-4b67-a16c-df38da2c22df", 00:24:30.741 "md_size": 32, 00:24:30.741 "md_interleave": false, 00:24:30.741 "dif_type": 0, 00:24:30.741 "assigned_rate_limits": { 00:24:30.741 "rw_ios_per_sec": 0, 00:24:30.741 "rw_mbytes_per_sec": 0, 00:24:30.741 "r_mbytes_per_sec": 0, 00:24:30.741 "w_mbytes_per_sec": 0 00:24:30.741 }, 00:24:30.741 "claimed": true, 00:24:30.742 "claim_type": "exclusive_write", 00:24:30.742 "zoned": false, 00:24:30.742 "supported_io_types": { 00:24:30.742 "read": true, 00:24:30.742 "write": true, 00:24:30.742 "unmap": true, 00:24:30.742 "write_zeroes": true, 00:24:30.742 "flush": true, 00:24:30.742 "reset": true, 00:24:30.742 "compare": false, 00:24:30.742 "compare_and_write": false, 00:24:30.742 "abort": true, 00:24:30.742 "nvme_admin": false, 00:24:30.742 "nvme_io": false 00:24:30.742 }, 00:24:30.742 "memory_domains": [ 00:24:30.742 { 00:24:30.742 "dma_device_id": "system", 00:24:30.742 "dma_device_type": 1 00:24:30.742 }, 00:24:30.742 { 00:24:30.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.742 "dma_device_type": 2 00:24:30.742 } 00:24:30.742 ], 00:24:30.742 "driver_specific": {} 00:24:30.742 } 00:24:30.742 ] 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 2 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:30.742 03:19:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.000 03:19:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:31.000 "name": "Existed_Raid", 00:24:31.000 "uuid": "1e426c03-4df6-4bdd-a114-cadcf7a3507c", 00:24:31.000 "strip_size_kb": 0, 00:24:31.000 "state": "configuring", 00:24:31.000 "raid_level": "raid1", 00:24:31.000 "superblock": true, 00:24:31.000 "num_base_bdevs": 2, 00:24:31.000 "num_base_bdevs_discovered": 1, 00:24:31.000 "num_base_bdevs_operational": 2, 00:24:31.000 "base_bdevs_list": [ 00:24:31.000 { 00:24:31.000 "name": "BaseBdev1", 00:24:31.000 "uuid": "f26c035d-03d0-4b67-a16c-df38da2c22df", 00:24:31.000 "is_configured": true, 00:24:31.000 "data_offset": 256, 00:24:31.000 "data_size": 7936 00:24:31.000 }, 00:24:31.000 { 00:24:31.000 "name": "BaseBdev2", 00:24:31.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.000 "is_configured": false, 00:24:31.000 "data_offset": 0, 00:24:31.000 "data_size": 0 00:24:31.000 } 00:24:31.000 ] 00:24:31.000 }' 00:24:31.000 03:19:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:31.000 03:19:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:31.568 03:19:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:31.827 [2024-05-15 03:19:02.925429] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:31.827 [2024-05-15 03:19:02.925466] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23e2060 name Existed_Raid, state configuring 00:24:31.827 03:19:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:32.085 [2024-05-15 
03:19:03.182144] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:32.086 [2024-05-15 03:19:03.183667] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:32.086 [2024-05-15 03:19:03.183698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.086 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.344 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:32.344 "name": "Existed_Raid", 00:24:32.344 "uuid": "3a7046d5-7db7-47e1-a70d-3d83b79d3492", 00:24:32.344 "strip_size_kb": 0, 00:24:32.344 "state": "configuring", 00:24:32.344 "raid_level": "raid1", 00:24:32.344 "superblock": true, 00:24:32.344 "num_base_bdevs": 2, 00:24:32.344 "num_base_bdevs_discovered": 1, 00:24:32.344 "num_base_bdevs_operational": 2, 00:24:32.344 "base_bdevs_list": [ 00:24:32.344 { 00:24:32.344 "name": "BaseBdev1", 00:24:32.344 "uuid": "f26c035d-03d0-4b67-a16c-df38da2c22df", 00:24:32.344 "is_configured": true, 00:24:32.344 "data_offset": 256, 00:24:32.344 "data_size": 7936 00:24:32.344 }, 00:24:32.344 { 00:24:32.344 "name": "BaseBdev2", 00:24:32.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.345 "is_configured": false, 00:24:32.345 "data_offset": 0, 00:24:32.345 "data_size": 0 00:24:32.345 } 00:24:32.345 ] 00:24:32.345 }' 00:24:32.345 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:32.345 03:19:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.277 03:19:04 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:24:33.277 [2024-05-15 03:19:04.329346] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:33.277 [2024-05-15 03:19:04.329490] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x23e16b0 00:24:33.277 [2024-05-15 03:19:04.329501] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:33.277 [2024-05-15 03:19:04.329563] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x257ecf0 00:24:33.277 [2024-05-15 03:19:04.329666] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23e16b0 00:24:33.277 [2024-05-15 03:19:04.329674] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x23e16b0 00:24:33.277 [2024-05-15 03:19:04.329740] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:33.277 BaseBdev2 00:24:33.277 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:24:33.277 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:33.277 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:33.277 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:24:33.277 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:33.277 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:33.277 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:33.571 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:33.830 [ 00:24:33.830 { 00:24:33.830 "name": "BaseBdev2", 00:24:33.830 "aliases": [ 00:24:33.830 "97ddd2b4-349a-423d-ac72-067e31b39faf" 00:24:33.830 ], 00:24:33.830 "product_name": "Malloc disk", 00:24:33.830 "block_size": 4096, 00:24:33.830 "num_blocks": 8192, 00:24:33.830 "uuid": "97ddd2b4-349a-423d-ac72-067e31b39faf", 00:24:33.830 "md_size": 32, 00:24:33.830 "md_interleave": false, 00:24:33.830 "dif_type": 0, 00:24:33.830 "assigned_rate_limits": { 00:24:33.830 "rw_ios_per_sec": 0, 00:24:33.830 "rw_mbytes_per_sec": 0, 00:24:33.830 "r_mbytes_per_sec": 0, 00:24:33.830 "w_mbytes_per_sec": 0 00:24:33.830 }, 00:24:33.830 "claimed": true, 00:24:33.830 "claim_type": "exclusive_write", 00:24:33.830 "zoned": false, 00:24:33.830 "supported_io_types": { 00:24:33.830 "read": true, 00:24:33.830 "write": true, 00:24:33.830 "unmap": true, 00:24:33.830 "write_zeroes": true, 00:24:33.830 "flush": true, 00:24:33.830 "reset": true, 00:24:33.830 "compare": false, 00:24:33.830 "compare_and_write": false, 00:24:33.830 "abort": true, 00:24:33.830 "nvme_admin": false, 00:24:33.830 "nvme_io": false 00:24:33.830 }, 00:24:33.830 "memory_domains": [ 00:24:33.830 { 00:24:33.830 "dma_device_id": "system", 00:24:33.830 
"dma_device_type": 1 00:24:33.830 }, 00:24:33.830 { 00:24:33.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.830 "dma_device_type": 2 00:24:33.830 } 00:24:33.830 ], 00:24:33.830 "driver_specific": {} 00:24:33.830 } 00:24:33.830 ] 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.830 03:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:34.089 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:34.089 "name": "Existed_Raid", 00:24:34.089 "uuid": "3a7046d5-7db7-47e1-a70d-3d83b79d3492", 00:24:34.089 "strip_size_kb": 0, 00:24:34.089 "state": "online", 00:24:34.089 "raid_level": "raid1", 00:24:34.089 "superblock": true, 00:24:34.089 "num_base_bdevs": 2, 00:24:34.089 "num_base_bdevs_discovered": 2, 00:24:34.089 "num_base_bdevs_operational": 2, 00:24:34.089 "base_bdevs_list": [ 00:24:34.089 { 00:24:34.089 "name": "BaseBdev1", 00:24:34.089 "uuid": "f26c035d-03d0-4b67-a16c-df38da2c22df", 00:24:34.089 "is_configured": true, 00:24:34.089 "data_offset": 256, 00:24:34.089 "data_size": 7936 00:24:34.089 }, 00:24:34.089 { 00:24:34.089 "name": "BaseBdev2", 00:24:34.089 "uuid": "97ddd2b4-349a-423d-ac72-067e31b39faf", 00:24:34.089 "is_configured": true, 00:24:34.089 "data_offset": 256, 00:24:34.089 "data_size": 7936 00:24:34.089 } 00:24:34.089 ] 00:24:34.089 }' 00:24:34.089 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:34.089 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:34.654 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:24:34.654 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:24:34.654 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:34.654 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:34.654 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:34.654 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:24:34.654 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:34.654 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:34.913 [2024-05-15 03:19:05.937957] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:34.913 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:34.913 "name": "Existed_Raid", 00:24:34.913 "aliases": [ 00:24:34.913 "3a7046d5-7db7-47e1-a70d-3d83b79d3492" 00:24:34.913 ], 00:24:34.913 "product_name": "Raid Volume", 00:24:34.913 "block_size": 4096, 00:24:34.913 "num_blocks": 7936, 00:24:34.913 "uuid": "3a7046d5-7db7-47e1-a70d-3d83b79d3492", 00:24:34.913 "md_size": 32, 00:24:34.913 "md_interleave": false, 00:24:34.913 "dif_type": 0, 00:24:34.913 "assigned_rate_limits": { 00:24:34.913 "rw_ios_per_sec": 0, 00:24:34.913 "rw_mbytes_per_sec": 0, 00:24:34.913 "r_mbytes_per_sec": 0, 00:24:34.913 "w_mbytes_per_sec": 0 00:24:34.913 }, 00:24:34.913 "claimed": false, 00:24:34.913 "zoned": false, 00:24:34.913 "supported_io_types": { 00:24:34.913 "read": true, 00:24:34.913 "write": true, 00:24:34.913 "unmap": false, 00:24:34.913 "write_zeroes": true, 00:24:34.913 "flush": false, 00:24:34.913 "reset": true, 00:24:34.913 "compare": false, 00:24:34.913 "compare_and_write": false, 00:24:34.913 "abort": false, 00:24:34.913 "nvme_admin": false, 00:24:34.913 "nvme_io": false 00:24:34.913 }, 00:24:34.913 "memory_domains": [ 00:24:34.913 { 00:24:34.913 "dma_device_id": "system", 00:24:34.913 "dma_device_type": 1 00:24:34.913 }, 00:24:34.913 { 00:24:34.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.913 "dma_device_type": 2 00:24:34.913 }, 00:24:34.913 { 00:24:34.913 "dma_device_id": "system", 00:24:34.913 "dma_device_type": 1 00:24:34.913 }, 00:24:34.913 { 00:24:34.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.913 "dma_device_type": 2 00:24:34.913 } 00:24:34.913 ], 00:24:34.913 "driver_specific": { 00:24:34.913 "raid": { 00:24:34.913 "uuid": "3a7046d5-7db7-47e1-a70d-3d83b79d3492", 00:24:34.913 "strip_size_kb": 0, 00:24:34.913 "state": "online", 00:24:34.913 "raid_level": "raid1", 00:24:34.913 "superblock": true, 00:24:34.913 "num_base_bdevs": 2, 00:24:34.913 "num_base_bdevs_discovered": 2, 00:24:34.913 "num_base_bdevs_operational": 2, 00:24:34.913 "base_bdevs_list": [ 00:24:34.913 { 00:24:34.913 "name": "BaseBdev1", 00:24:34.913 "uuid": "f26c035d-03d0-4b67-a16c-df38da2c22df", 00:24:34.913 "is_configured": true, 00:24:34.913 "data_offset": 256, 00:24:34.913 "data_size": 7936 00:24:34.913 }, 00:24:34.913 { 00:24:34.913 "name": "BaseBdev2", 00:24:34.913 "uuid": 
"97ddd2b4-349a-423d-ac72-067e31b39faf", 00:24:34.913 "is_configured": true, 00:24:34.913 "data_offset": 256, 00:24:34.913 "data_size": 7936 00:24:34.913 } 00:24:34.913 ] 00:24:34.913 } 00:24:34.913 } 00:24:34.913 }' 00:24:34.913 03:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:34.913 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:24:34.913 BaseBdev2' 00:24:34.913 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:34.913 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:34.913 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:35.172 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:35.172 "name": "BaseBdev1", 00:24:35.172 "aliases": [ 00:24:35.172 "f26c035d-03d0-4b67-a16c-df38da2c22df" 00:24:35.172 ], 00:24:35.172 "product_name": "Malloc disk", 00:24:35.172 "block_size": 4096, 00:24:35.172 "num_blocks": 8192, 00:24:35.172 "uuid": "f26c035d-03d0-4b67-a16c-df38da2c22df", 00:24:35.172 "md_size": 32, 00:24:35.172 "md_interleave": false, 00:24:35.172 "dif_type": 0, 00:24:35.172 "assigned_rate_limits": { 00:24:35.172 "rw_ios_per_sec": 0, 00:24:35.172 "rw_mbytes_per_sec": 0, 00:24:35.172 "r_mbytes_per_sec": 0, 00:24:35.172 "w_mbytes_per_sec": 0 00:24:35.172 }, 00:24:35.172 "claimed": true, 00:24:35.172 "claim_type": "exclusive_write", 00:24:35.172 "zoned": false, 00:24:35.172 "supported_io_types": { 00:24:35.172 "read": true, 00:24:35.172 "write": true, 00:24:35.172 "unmap": true, 00:24:35.172 "write_zeroes": true, 00:24:35.172 "flush": true, 00:24:35.172 "reset": true, 00:24:35.172 "compare": false, 00:24:35.172 "compare_and_write": false, 00:24:35.172 "abort": true, 00:24:35.172 "nvme_admin": false, 00:24:35.172 "nvme_io": false 00:24:35.172 }, 00:24:35.172 "memory_domains": [ 00:24:35.172 { 00:24:35.172 "dma_device_id": "system", 00:24:35.172 "dma_device_type": 1 00:24:35.172 }, 00:24:35.172 { 00:24:35.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.172 "dma_device_type": 2 00:24:35.172 } 00:24:35.172 ], 00:24:35.172 "driver_specific": {} 00:24:35.172 }' 00:24:35.172 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:35.172 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:35.431 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:24:35.431 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:35.431 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:35.431 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:24:35.431 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:35.431 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:35.431 03:19:06 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:24:35.431 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:35.431 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:35.690 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:24:35.690 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:35.690 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:35.690 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:35.947 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:35.948 "name": "BaseBdev2", 00:24:35.948 "aliases": [ 00:24:35.948 "97ddd2b4-349a-423d-ac72-067e31b39faf" 00:24:35.948 ], 00:24:35.948 "product_name": "Malloc disk", 00:24:35.948 "block_size": 4096, 00:24:35.948 "num_blocks": 8192, 00:24:35.948 "uuid": "97ddd2b4-349a-423d-ac72-067e31b39faf", 00:24:35.948 "md_size": 32, 00:24:35.948 "md_interleave": false, 00:24:35.948 "dif_type": 0, 00:24:35.948 "assigned_rate_limits": { 00:24:35.948 "rw_ios_per_sec": 0, 00:24:35.948 "rw_mbytes_per_sec": 0, 00:24:35.948 "r_mbytes_per_sec": 0, 00:24:35.948 "w_mbytes_per_sec": 0 00:24:35.948 }, 00:24:35.948 "claimed": true, 00:24:35.948 "claim_type": "exclusive_write", 00:24:35.948 "zoned": false, 00:24:35.948 "supported_io_types": { 00:24:35.948 "read": true, 00:24:35.948 "write": true, 00:24:35.948 "unmap": true, 00:24:35.948 "write_zeroes": true, 00:24:35.948 "flush": true, 00:24:35.948 "reset": true, 00:24:35.948 "compare": false, 00:24:35.948 "compare_and_write": false, 00:24:35.948 "abort": true, 00:24:35.948 "nvme_admin": false, 00:24:35.948 "nvme_io": false 00:24:35.948 }, 00:24:35.948 "memory_domains": [ 00:24:35.948 { 00:24:35.948 "dma_device_id": "system", 00:24:35.948 "dma_device_type": 1 00:24:35.948 }, 00:24:35.948 { 00:24:35.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.948 "dma_device_type": 2 00:24:35.948 } 00:24:35.948 ], 00:24:35.948 "driver_specific": {} 00:24:35.948 }' 00:24:35.948 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:35.948 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:35.948 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:24:35.948 03:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:35.948 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:35.948 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:24:35.948 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:36.205 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:36.205 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:24:36.205 03:19:07 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:36.205 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:36.205 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:24:36.205 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:36.463 [2024-05-15 03:19:07.485897] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:36.463 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # local expected_state 00:24:36.463 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:24:36.463 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # case $1 in 00:24:36.463 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # return 0 00:24:36.463 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:24:36.463 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:36.464 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:36.464 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:36.464 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:36.464 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:36.464 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:36.464 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.464 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.464 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.464 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.464 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.464 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.721 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:36.721 "name": "Existed_Raid", 00:24:36.721 "uuid": "3a7046d5-7db7-47e1-a70d-3d83b79d3492", 00:24:36.721 "strip_size_kb": 0, 00:24:36.721 "state": "online", 00:24:36.721 "raid_level": "raid1", 00:24:36.721 "superblock": true, 00:24:36.721 "num_base_bdevs": 2, 00:24:36.721 "num_base_bdevs_discovered": 1, 00:24:36.721 "num_base_bdevs_operational": 1, 00:24:36.721 "base_bdevs_list": [ 00:24:36.721 { 00:24:36.721 "name": null, 00:24:36.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.721 "is_configured": 
false, 00:24:36.721 "data_offset": 256, 00:24:36.721 "data_size": 7936 00:24:36.721 }, 00:24:36.721 { 00:24:36.721 "name": "BaseBdev2", 00:24:36.721 "uuid": "97ddd2b4-349a-423d-ac72-067e31b39faf", 00:24:36.721 "is_configured": true, 00:24:36.721 "data_offset": 256, 00:24:36.721 "data_size": 7936 00:24:36.721 } 00:24:36.721 ] 00:24:36.721 }' 00:24:36.721 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:36.721 03:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:37.286 03:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:37.286 03:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:37.286 03:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.286 03:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:24:37.545 03:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:24:37.545 03:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:37.545 03:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:37.804 [2024-05-15 03:19:08.864190] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:37.804 [2024-05-15 03:19:08.864260] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:37.804 [2024-05-15 03:19:08.875499] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:37.804 [2024-05-15 03:19:08.875566] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:37.804 [2024-05-15 03:19:08.875576] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23e16b0 name Existed_Raid, state offline 00:24:37.804 03:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:37.804 03:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:37.804 03:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.804 03:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@342 -- # killprocess 8301 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 8301 ']' 00:24:38.062 03:19:09 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 8301 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 8301 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 8301' 00:24:38.062 killing process with pid 8301 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 8301 00:24:38.062 [2024-05-15 03:19:09.191244] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:38.062 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 8301 00:24:38.062 [2024-05-15 03:19:09.192111] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:38.321 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@344 -- # return 0 00:24:38.321 00:24:38.321 real 0m11.031s 00:24:38.321 user 0m20.044s 00:24:38.321 sys 0m1.627s 00:24:38.321 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:38.321 03:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.321 ************************************ 00:24:38.321 END TEST raid_state_function_test_sb_md_separate 00:24:38.321 ************************************ 00:24:38.321 03:19:09 bdev_raid -- bdev/bdev_raid.sh@852 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:24:38.321 03:19:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:24:38.321 03:19:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:38.321 03:19:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:38.580 ************************************ 00:24:38.580 START TEST raid_superblock_test_md_separate 00:24:38.580 ************************************ 00:24:38.580 03:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:24:38.580 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:38.580 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:38.580 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:38.580 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:38.580 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:38.581 03:19:09 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=10333 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 10333 /var/tmp/spdk-raid.sock 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@827 -- # '[' -z 10333 ']' 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:38.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:38.581 03:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.581 [2024-05-15 03:19:09.550256] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
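The trace that follows drives the whole raid_superblock_test_md_separate flow over JSON-RPC against the bdev_svc app started above. As a minimal sketch, the same sequence can be replayed by hand using the rpc.py calls that appear verbatim in the trace; rpc_py below is only a shorthand for the full scripts/rpc.py invocation, and a bdev_svc instance is assumed to already be listening on /var/tmp/spdk-raid.sock:

rpc_py="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Two 32 MiB malloc bdevs: 4096-byte blocks with 32 bytes of separate metadata
# (8192 blocks each, matching the bdev_get_bdevs output later in the trace)
$rpc_py bdev_malloc_create 32 4096 -m 32 -b malloc1
$rpc_py bdev_malloc_create 32 4096 -m 32 -b malloc2

# Passthru bdevs with fixed UUIDs layered on top of the malloc bdevs
$rpc_py bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc_py bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# RAID1 volume over both passthru bdevs; -s writes an on-disk superblock
$rpc_py bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

# Verify state, then tear down in reverse order
$rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
$rpc_py bdev_raid_delete raid_bdev1
$rpc_py bdev_passthru_delete pt2
$rpc_py bdev_passthru_delete pt1

Because -s persists a superblock onto pt1/pt2, a later bdev_raid_create over the same base bdevs fails with "File exists", which is exactly the negative case the trace below exercises with malloc1/malloc2.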
00:24:38.581 [2024-05-15 03:19:09.550314] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid10333 ] 00:24:38.581 [2024-05-15 03:19:09.647394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.839 [2024-05-15 03:19:09.741334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.839 [2024-05-15 03:19:09.800971] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:38.839 [2024-05-15 03:19:09.801011] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:39.406 03:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:39.406 03:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # return 0 00:24:39.406 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:39.406 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:39.406 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:39.406 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:39.406 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:39.406 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:39.406 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:39.406 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:39.406 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:24:39.664 malloc1 00:24:39.664 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:39.923 [2024-05-15 03:19:10.840116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:39.923 [2024-05-15 03:19:10.840159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.923 [2024-05-15 03:19:10.840179] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1689500 00:24:39.923 [2024-05-15 03:19:10.840188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.923 [2024-05-15 03:19:10.841694] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.923 [2024-05-15 03:19:10.841720] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:39.923 pt1 00:24:39.923 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:39.923 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:39.923 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:39.923 03:19:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:39.923 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:39.923 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:39.923 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:39.923 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:39.923 03:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:24:40.181 malloc2 00:24:40.181 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:40.439 [2024-05-15 03:19:11.370887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:40.439 [2024-05-15 03:19:11.370928] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.439 [2024-05-15 03:19:11.370945] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x179c260 00:24:40.439 [2024-05-15 03:19:11.370955] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.439 [2024-05-15 03:19:11.372349] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.439 [2024-05-15 03:19:11.372375] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:40.439 pt2 00:24:40.439 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:40.439 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:40.439 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:24:40.697 [2024-05-15 03:19:11.619561] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:40.697 [2024-05-15 03:19:11.620912] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:40.697 [2024-05-15 03:19:11.621066] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1689cc0 00:24:40.697 [2024-05-15 03:19:11.621082] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:40.697 [2024-05-15 03:19:11.621149] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16077a0 00:24:40.697 [2024-05-15 03:19:11.621269] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1689cc0 00:24:40.697 [2024-05-15 03:19:11.621277] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1689cc0 00:24:40.697 [2024-05-15 03:19:11.621349] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.697 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:40.697 03:19:11 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:40.697 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:40.697 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:40.697 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:40.697 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:40.697 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:40.697 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:40.697 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:40.697 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:40.697 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.697 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.956 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:40.956 "name": "raid_bdev1", 00:24:40.956 "uuid": "ab582b0c-9e99-4843-a0b9-c3034821a740", 00:24:40.956 "strip_size_kb": 0, 00:24:40.956 "state": "online", 00:24:40.956 "raid_level": "raid1", 00:24:40.956 "superblock": true, 00:24:40.956 "num_base_bdevs": 2, 00:24:40.956 "num_base_bdevs_discovered": 2, 00:24:40.956 "num_base_bdevs_operational": 2, 00:24:40.956 "base_bdevs_list": [ 00:24:40.956 { 00:24:40.956 "name": "pt1", 00:24:40.956 "uuid": "1857ec3c-4875-557c-bfbd-0e8f455c3e2a", 00:24:40.956 "is_configured": true, 00:24:40.956 "data_offset": 256, 00:24:40.956 "data_size": 7936 00:24:40.956 }, 00:24:40.956 { 00:24:40.956 "name": "pt2", 00:24:40.956 "uuid": "a24600a4-765e-5e41-923d-ffe0da4f88eb", 00:24:40.956 "is_configured": true, 00:24:40.956 "data_offset": 256, 00:24:40.956 "data_size": 7936 00:24:40.956 } 00:24:40.956 ] 00:24:40.956 }' 00:24:40.956 03:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:40.956 03:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:41.521 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:41.521 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:24:41.521 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:41.521 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:41.521 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:41.521 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:24:41.521 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:41.521 03:19:12 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:41.779 [2024-05-15 03:19:12.746803] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:41.779 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:41.779 "name": "raid_bdev1", 00:24:41.779 "aliases": [ 00:24:41.779 "ab582b0c-9e99-4843-a0b9-c3034821a740" 00:24:41.779 ], 00:24:41.779 "product_name": "Raid Volume", 00:24:41.779 "block_size": 4096, 00:24:41.779 "num_blocks": 7936, 00:24:41.779 "uuid": "ab582b0c-9e99-4843-a0b9-c3034821a740", 00:24:41.779 "md_size": 32, 00:24:41.779 "md_interleave": false, 00:24:41.779 "dif_type": 0, 00:24:41.779 "assigned_rate_limits": { 00:24:41.779 "rw_ios_per_sec": 0, 00:24:41.779 "rw_mbytes_per_sec": 0, 00:24:41.779 "r_mbytes_per_sec": 0, 00:24:41.779 "w_mbytes_per_sec": 0 00:24:41.779 }, 00:24:41.779 "claimed": false, 00:24:41.779 "zoned": false, 00:24:41.779 "supported_io_types": { 00:24:41.779 "read": true, 00:24:41.779 "write": true, 00:24:41.779 "unmap": false, 00:24:41.779 "write_zeroes": true, 00:24:41.779 "flush": false, 00:24:41.779 "reset": true, 00:24:41.779 "compare": false, 00:24:41.779 "compare_and_write": false, 00:24:41.779 "abort": false, 00:24:41.779 "nvme_admin": false, 00:24:41.779 "nvme_io": false 00:24:41.779 }, 00:24:41.779 "memory_domains": [ 00:24:41.779 { 00:24:41.779 "dma_device_id": "system", 00:24:41.779 "dma_device_type": 1 00:24:41.779 }, 00:24:41.779 { 00:24:41.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.779 "dma_device_type": 2 00:24:41.779 }, 00:24:41.779 { 00:24:41.779 "dma_device_id": "system", 00:24:41.779 "dma_device_type": 1 00:24:41.779 }, 00:24:41.779 { 00:24:41.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.779 "dma_device_type": 2 00:24:41.779 } 00:24:41.779 ], 00:24:41.779 "driver_specific": { 00:24:41.779 "raid": { 00:24:41.779 "uuid": "ab582b0c-9e99-4843-a0b9-c3034821a740", 00:24:41.779 "strip_size_kb": 0, 00:24:41.779 "state": "online", 00:24:41.779 "raid_level": "raid1", 00:24:41.779 "superblock": true, 00:24:41.779 "num_base_bdevs": 2, 00:24:41.779 "num_base_bdevs_discovered": 2, 00:24:41.779 "num_base_bdevs_operational": 2, 00:24:41.779 "base_bdevs_list": [ 00:24:41.779 { 00:24:41.779 "name": "pt1", 00:24:41.779 "uuid": "1857ec3c-4875-557c-bfbd-0e8f455c3e2a", 00:24:41.779 "is_configured": true, 00:24:41.779 "data_offset": 256, 00:24:41.779 "data_size": 7936 00:24:41.779 }, 00:24:41.780 { 00:24:41.780 "name": "pt2", 00:24:41.780 "uuid": "a24600a4-765e-5e41-923d-ffe0da4f88eb", 00:24:41.780 "is_configured": true, 00:24:41.780 "data_offset": 256, 00:24:41.780 "data_size": 7936 00:24:41.780 } 00:24:41.780 ] 00:24:41.780 } 00:24:41.780 } 00:24:41.780 }' 00:24:41.780 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:41.780 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:24:41.780 pt2' 00:24:41.780 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:41.780 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:41.780 03:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:42.037 03:19:13 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:42.037 "name": "pt1", 00:24:42.037 "aliases": [ 00:24:42.037 "1857ec3c-4875-557c-bfbd-0e8f455c3e2a" 00:24:42.037 ], 00:24:42.037 "product_name": "passthru", 00:24:42.037 "block_size": 4096, 00:24:42.037 "num_blocks": 8192, 00:24:42.038 "uuid": "1857ec3c-4875-557c-bfbd-0e8f455c3e2a", 00:24:42.038 "md_size": 32, 00:24:42.038 "md_interleave": false, 00:24:42.038 "dif_type": 0, 00:24:42.038 "assigned_rate_limits": { 00:24:42.038 "rw_ios_per_sec": 0, 00:24:42.038 "rw_mbytes_per_sec": 0, 00:24:42.038 "r_mbytes_per_sec": 0, 00:24:42.038 "w_mbytes_per_sec": 0 00:24:42.038 }, 00:24:42.038 "claimed": true, 00:24:42.038 "claim_type": "exclusive_write", 00:24:42.038 "zoned": false, 00:24:42.038 "supported_io_types": { 00:24:42.038 "read": true, 00:24:42.038 "write": true, 00:24:42.038 "unmap": true, 00:24:42.038 "write_zeroes": true, 00:24:42.038 "flush": true, 00:24:42.038 "reset": true, 00:24:42.038 "compare": false, 00:24:42.038 "compare_and_write": false, 00:24:42.038 "abort": true, 00:24:42.038 "nvme_admin": false, 00:24:42.038 "nvme_io": false 00:24:42.038 }, 00:24:42.038 "memory_domains": [ 00:24:42.038 { 00:24:42.038 "dma_device_id": "system", 00:24:42.038 "dma_device_type": 1 00:24:42.038 }, 00:24:42.038 { 00:24:42.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.038 "dma_device_type": 2 00:24:42.038 } 00:24:42.038 ], 00:24:42.038 "driver_specific": { 00:24:42.038 "passthru": { 00:24:42.038 "name": "pt1", 00:24:42.038 "base_bdev_name": "malloc1" 00:24:42.038 } 00:24:42.038 } 00:24:42.038 }' 00:24:42.038 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:42.038 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:42.038 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:24:42.038 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:42.296 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:42.296 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:24:42.296 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:42.296 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:42.296 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:24:42.296 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:42.296 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:42.296 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:24:42.296 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:42.296 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:42.296 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:42.554 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:42.554 "name": "pt2", 00:24:42.554 "aliases": [ 00:24:42.554 "a24600a4-765e-5e41-923d-ffe0da4f88eb" 
00:24:42.554 ], 00:24:42.554 "product_name": "passthru", 00:24:42.554 "block_size": 4096, 00:24:42.554 "num_blocks": 8192, 00:24:42.554 "uuid": "a24600a4-765e-5e41-923d-ffe0da4f88eb", 00:24:42.554 "md_size": 32, 00:24:42.554 "md_interleave": false, 00:24:42.554 "dif_type": 0, 00:24:42.554 "assigned_rate_limits": { 00:24:42.554 "rw_ios_per_sec": 0, 00:24:42.554 "rw_mbytes_per_sec": 0, 00:24:42.554 "r_mbytes_per_sec": 0, 00:24:42.554 "w_mbytes_per_sec": 0 00:24:42.554 }, 00:24:42.554 "claimed": true, 00:24:42.554 "claim_type": "exclusive_write", 00:24:42.554 "zoned": false, 00:24:42.554 "supported_io_types": { 00:24:42.554 "read": true, 00:24:42.554 "write": true, 00:24:42.554 "unmap": true, 00:24:42.554 "write_zeroes": true, 00:24:42.554 "flush": true, 00:24:42.554 "reset": true, 00:24:42.554 "compare": false, 00:24:42.554 "compare_and_write": false, 00:24:42.554 "abort": true, 00:24:42.554 "nvme_admin": false, 00:24:42.554 "nvme_io": false 00:24:42.554 }, 00:24:42.554 "memory_domains": [ 00:24:42.554 { 00:24:42.554 "dma_device_id": "system", 00:24:42.554 "dma_device_type": 1 00:24:42.554 }, 00:24:42.554 { 00:24:42.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.554 "dma_device_type": 2 00:24:42.554 } 00:24:42.554 ], 00:24:42.554 "driver_specific": { 00:24:42.554 "passthru": { 00:24:42.554 "name": "pt2", 00:24:42.554 "base_bdev_name": "malloc2" 00:24:42.554 } 00:24:42.554 } 00:24:42.554 }' 00:24:42.554 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:42.812 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:42.812 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:24:42.812 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:42.812 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:42.812 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:24:42.812 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:42.812 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:43.070 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:24:43.070 03:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:43.070 03:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:43.070 03:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:24:43.070 03:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:43.070 03:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:43.328 [2024-05-15 03:19:14.311005] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:43.328 03:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ab582b0c-9e99-4843-a0b9-c3034821a740 00:24:43.328 03:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z ab582b0c-9e99-4843-a0b9-c3034821a740 ']' 00:24:43.328 03:19:14 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:43.587 [2024-05-15 03:19:14.563443] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:43.587 [2024-05-15 03:19:14.563464] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:43.587 [2024-05-15 03:19:14.563517] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:43.587 [2024-05-15 03:19:14.563569] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:43.587 [2024-05-15 03:19:14.563578] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1689cc0 name raid_bdev1, state offline 00:24:43.587 03:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.587 03:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:43.845 03:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:43.845 03:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:43.845 03:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:43.845 03:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:44.104 03:19:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:44.104 03:19:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:44.362 03:19:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:44.362 03:19:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:44.620 03:19:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:44.620 03:19:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:24:44.620 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:24:44.620 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:24:44.620 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:44.620 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.621 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:44.621 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.621 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:44.621 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.621 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:44.621 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:44.621 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:24:44.879 [2024-05-15 03:19:15.818736] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:44.879 [2024-05-15 03:19:15.820169] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:44.879 [2024-05-15 03:19:15.820224] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:44.879 [2024-05-15 03:19:15.820261] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:44.879 [2024-05-15 03:19:15.820278] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:44.879 [2024-05-15 03:19:15.820286] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x179cd90 name raid_bdev1, state configuring 00:24:44.879 request: 00:24:44.879 { 00:24:44.879 "name": "raid_bdev1", 00:24:44.879 "raid_level": "raid1", 00:24:44.879 "base_bdevs": [ 00:24:44.879 "malloc1", 00:24:44.879 "malloc2" 00:24:44.879 ], 00:24:44.879 "superblock": false, 00:24:44.879 "method": "bdev_raid_create", 00:24:44.879 "req_id": 1 00:24:44.879 } 00:24:44.879 Got JSON-RPC error response 00:24:44.879 response: 00:24:44.879 { 00:24:44.879 "code": -17, 00:24:44.879 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:44.879 } 00:24:44.879 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:24:44.879 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:44.879 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:44.879 03:19:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:44.879 03:19:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.879 03:19:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:45.137 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:45.137 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:45.137 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:45.395 [2024-05-15 03:19:16.328024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:45.395 [2024-05-15 03:19:16.328061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.395 [2024-05-15 03:19:16.328080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1607580 00:24:45.395 [2024-05-15 03:19:16.328090] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.395 [2024-05-15 03:19:16.329608] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.395 [2024-05-15 03:19:16.329635] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:45.395 [2024-05-15 03:19:16.329677] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:45.395 [2024-05-15 03:19:16.329699] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:45.395 pt1 00:24:45.395 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:45.395 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:45.395 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:45.395 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:45.395 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:45.395 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:45.395 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:45.395 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:45.395 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:45.395 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:45.395 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.395 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.654 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:45.654 "name": "raid_bdev1", 00:24:45.654 "uuid": "ab582b0c-9e99-4843-a0b9-c3034821a740", 00:24:45.654 "strip_size_kb": 0, 00:24:45.654 "state": "configuring", 00:24:45.654 "raid_level": "raid1", 00:24:45.654 "superblock": true, 00:24:45.654 "num_base_bdevs": 2, 00:24:45.654 "num_base_bdevs_discovered": 1, 00:24:45.654 "num_base_bdevs_operational": 2, 00:24:45.654 "base_bdevs_list": [ 00:24:45.654 { 00:24:45.654 "name": "pt1", 00:24:45.654 "uuid": "1857ec3c-4875-557c-bfbd-0e8f455c3e2a", 00:24:45.654 "is_configured": true, 00:24:45.654 "data_offset": 256, 00:24:45.654 "data_size": 7936 00:24:45.654 }, 00:24:45.654 { 00:24:45.654 "name": null, 00:24:45.654 "uuid": 
"a24600a4-765e-5e41-923d-ffe0da4f88eb", 00:24:45.654 "is_configured": false, 00:24:45.654 "data_offset": 256, 00:24:45.654 "data_size": 7936 00:24:45.654 } 00:24:45.654 ] 00:24:45.654 }' 00:24:45.654 03:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:45.654 03:19:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:46.220 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:46.220 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:46.220 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:46.220 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:46.479 [2024-05-15 03:19:17.459065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:46.479 [2024-05-15 03:19:17.459110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:46.479 [2024-05-15 03:19:17.459127] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1608e70 00:24:46.479 [2024-05-15 03:19:17.459137] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:46.479 [2024-05-15 03:19:17.459319] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:46.479 [2024-05-15 03:19:17.459333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:46.479 [2024-05-15 03:19:17.459375] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:46.479 [2024-05-15 03:19:17.459392] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:46.479 [2024-05-15 03:19:17.459486] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x179f910 00:24:46.479 [2024-05-15 03:19:17.459495] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:46.479 [2024-05-15 03:19:17.459553] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x179cd40 00:24:46.479 [2024-05-15 03:19:17.459662] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x179f910 00:24:46.479 [2024-05-15 03:19:17.459670] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x179f910 00:24:46.479 [2024-05-15 03:19:17.459738] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:46.479 pt2 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.479 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.737 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:46.737 "name": "raid_bdev1", 00:24:46.737 "uuid": "ab582b0c-9e99-4843-a0b9-c3034821a740", 00:24:46.737 "strip_size_kb": 0, 00:24:46.737 "state": "online", 00:24:46.737 "raid_level": "raid1", 00:24:46.737 "superblock": true, 00:24:46.737 "num_base_bdevs": 2, 00:24:46.737 "num_base_bdevs_discovered": 2, 00:24:46.737 "num_base_bdevs_operational": 2, 00:24:46.737 "base_bdevs_list": [ 00:24:46.737 { 00:24:46.737 "name": "pt1", 00:24:46.737 "uuid": "1857ec3c-4875-557c-bfbd-0e8f455c3e2a", 00:24:46.738 "is_configured": true, 00:24:46.738 "data_offset": 256, 00:24:46.738 "data_size": 7936 00:24:46.738 }, 00:24:46.738 { 00:24:46.738 "name": "pt2", 00:24:46.738 "uuid": "a24600a4-765e-5e41-923d-ffe0da4f88eb", 00:24:46.738 "is_configured": true, 00:24:46.738 "data_offset": 256, 00:24:46.738 "data_size": 7936 00:24:46.738 } 00:24:46.738 ] 00:24:46.738 }' 00:24:46.738 03:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:46.738 03:19:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.303 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:47.303 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:24:47.303 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:47.303 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:47.303 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:47.303 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:24:47.303 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:47.303 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:47.561 [2024-05-15 03:19:18.598357] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:47.561 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:47.561 "name": "raid_bdev1", 00:24:47.561 "aliases": [ 00:24:47.561 
"ab582b0c-9e99-4843-a0b9-c3034821a740" 00:24:47.561 ], 00:24:47.561 "product_name": "Raid Volume", 00:24:47.561 "block_size": 4096, 00:24:47.561 "num_blocks": 7936, 00:24:47.561 "uuid": "ab582b0c-9e99-4843-a0b9-c3034821a740", 00:24:47.561 "md_size": 32, 00:24:47.561 "md_interleave": false, 00:24:47.561 "dif_type": 0, 00:24:47.561 "assigned_rate_limits": { 00:24:47.561 "rw_ios_per_sec": 0, 00:24:47.561 "rw_mbytes_per_sec": 0, 00:24:47.561 "r_mbytes_per_sec": 0, 00:24:47.561 "w_mbytes_per_sec": 0 00:24:47.561 }, 00:24:47.561 "claimed": false, 00:24:47.561 "zoned": false, 00:24:47.561 "supported_io_types": { 00:24:47.561 "read": true, 00:24:47.561 "write": true, 00:24:47.561 "unmap": false, 00:24:47.561 "write_zeroes": true, 00:24:47.561 "flush": false, 00:24:47.561 "reset": true, 00:24:47.561 "compare": false, 00:24:47.561 "compare_and_write": false, 00:24:47.561 "abort": false, 00:24:47.561 "nvme_admin": false, 00:24:47.561 "nvme_io": false 00:24:47.561 }, 00:24:47.561 "memory_domains": [ 00:24:47.561 { 00:24:47.561 "dma_device_id": "system", 00:24:47.561 "dma_device_type": 1 00:24:47.561 }, 00:24:47.561 { 00:24:47.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.561 "dma_device_type": 2 00:24:47.561 }, 00:24:47.561 { 00:24:47.561 "dma_device_id": "system", 00:24:47.561 "dma_device_type": 1 00:24:47.561 }, 00:24:47.561 { 00:24:47.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.561 "dma_device_type": 2 00:24:47.561 } 00:24:47.561 ], 00:24:47.561 "driver_specific": { 00:24:47.561 "raid": { 00:24:47.561 "uuid": "ab582b0c-9e99-4843-a0b9-c3034821a740", 00:24:47.561 "strip_size_kb": 0, 00:24:47.561 "state": "online", 00:24:47.561 "raid_level": "raid1", 00:24:47.561 "superblock": true, 00:24:47.561 "num_base_bdevs": 2, 00:24:47.561 "num_base_bdevs_discovered": 2, 00:24:47.561 "num_base_bdevs_operational": 2, 00:24:47.561 "base_bdevs_list": [ 00:24:47.561 { 00:24:47.561 "name": "pt1", 00:24:47.561 "uuid": "1857ec3c-4875-557c-bfbd-0e8f455c3e2a", 00:24:47.561 "is_configured": true, 00:24:47.561 "data_offset": 256, 00:24:47.561 "data_size": 7936 00:24:47.561 }, 00:24:47.561 { 00:24:47.561 "name": "pt2", 00:24:47.561 "uuid": "a24600a4-765e-5e41-923d-ffe0da4f88eb", 00:24:47.561 "is_configured": true, 00:24:47.561 "data_offset": 256, 00:24:47.561 "data_size": 7936 00:24:47.561 } 00:24:47.561 ] 00:24:47.561 } 00:24:47.561 } 00:24:47.561 }' 00:24:47.561 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:47.561 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:24:47.562 pt2' 00:24:47.562 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:47.562 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:47.562 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:47.820 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:47.820 "name": "pt1", 00:24:47.820 "aliases": [ 00:24:47.820 "1857ec3c-4875-557c-bfbd-0e8f455c3e2a" 00:24:47.820 ], 00:24:47.820 "product_name": "passthru", 00:24:47.820 "block_size": 4096, 00:24:47.820 "num_blocks": 8192, 00:24:47.820 "uuid": "1857ec3c-4875-557c-bfbd-0e8f455c3e2a", 00:24:47.820 
"md_size": 32, 00:24:47.820 "md_interleave": false, 00:24:47.820 "dif_type": 0, 00:24:47.820 "assigned_rate_limits": { 00:24:47.820 "rw_ios_per_sec": 0, 00:24:47.820 "rw_mbytes_per_sec": 0, 00:24:47.820 "r_mbytes_per_sec": 0, 00:24:47.820 "w_mbytes_per_sec": 0 00:24:47.820 }, 00:24:47.820 "claimed": true, 00:24:47.820 "claim_type": "exclusive_write", 00:24:47.820 "zoned": false, 00:24:47.820 "supported_io_types": { 00:24:47.820 "read": true, 00:24:47.820 "write": true, 00:24:47.820 "unmap": true, 00:24:47.820 "write_zeroes": true, 00:24:47.820 "flush": true, 00:24:47.820 "reset": true, 00:24:47.820 "compare": false, 00:24:47.820 "compare_and_write": false, 00:24:47.820 "abort": true, 00:24:47.820 "nvme_admin": false, 00:24:47.820 "nvme_io": false 00:24:47.820 }, 00:24:47.820 "memory_domains": [ 00:24:47.820 { 00:24:47.820 "dma_device_id": "system", 00:24:47.820 "dma_device_type": 1 00:24:47.820 }, 00:24:47.820 { 00:24:47.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.820 "dma_device_type": 2 00:24:47.820 } 00:24:47.820 ], 00:24:47.820 "driver_specific": { 00:24:47.820 "passthru": { 00:24:47.820 "name": "pt1", 00:24:47.820 "base_bdev_name": "malloc1" 00:24:47.820 } 00:24:47.820 } 00:24:47.820 }' 00:24:47.820 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:47.820 03:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:48.078 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:24:48.078 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:48.078 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:48.078 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:24:48.078 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:48.078 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:48.078 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:24:48.078 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:48.341 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:48.341 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:24:48.341 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:48.341 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:48.341 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:48.601 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:48.601 "name": "pt2", 00:24:48.601 "aliases": [ 00:24:48.601 "a24600a4-765e-5e41-923d-ffe0da4f88eb" 00:24:48.601 ], 00:24:48.601 "product_name": "passthru", 00:24:48.601 "block_size": 4096, 00:24:48.601 "num_blocks": 8192, 00:24:48.601 "uuid": "a24600a4-765e-5e41-923d-ffe0da4f88eb", 00:24:48.601 "md_size": 32, 00:24:48.601 "md_interleave": false, 00:24:48.601 "dif_type": 0, 00:24:48.601 "assigned_rate_limits": { 00:24:48.601 "rw_ios_per_sec": 0, 
00:24:48.601 "rw_mbytes_per_sec": 0, 00:24:48.601 "r_mbytes_per_sec": 0, 00:24:48.601 "w_mbytes_per_sec": 0 00:24:48.601 }, 00:24:48.601 "claimed": true, 00:24:48.601 "claim_type": "exclusive_write", 00:24:48.601 "zoned": false, 00:24:48.601 "supported_io_types": { 00:24:48.601 "read": true, 00:24:48.601 "write": true, 00:24:48.601 "unmap": true, 00:24:48.601 "write_zeroes": true, 00:24:48.601 "flush": true, 00:24:48.601 "reset": true, 00:24:48.601 "compare": false, 00:24:48.601 "compare_and_write": false, 00:24:48.601 "abort": true, 00:24:48.601 "nvme_admin": false, 00:24:48.601 "nvme_io": false 00:24:48.601 }, 00:24:48.601 "memory_domains": [ 00:24:48.601 { 00:24:48.601 "dma_device_id": "system", 00:24:48.601 "dma_device_type": 1 00:24:48.601 }, 00:24:48.601 { 00:24:48.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.602 "dma_device_type": 2 00:24:48.602 } 00:24:48.602 ], 00:24:48.602 "driver_specific": { 00:24:48.602 "passthru": { 00:24:48.602 "name": "pt2", 00:24:48.602 "base_bdev_name": "malloc2" 00:24:48.602 } 00:24:48.602 } 00:24:48.602 }' 00:24:48.602 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:48.602 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:48.602 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:24:48.602 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:48.602 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:48.602 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:24:48.602 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:48.859 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:48.859 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:24:48.859 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:48.860 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:48.860 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:24:48.860 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:48.860 03:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:49.152 [2024-05-15 03:19:20.146662] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:49.152 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' ab582b0c-9e99-4843-a0b9-c3034821a740 '!=' ab582b0c-9e99-4843-a0b9-c3034821a740 ']' 00:24:49.152 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:49.152 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # case $1 in 00:24:49.152 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@215 -- # return 0 00:24:49.152 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:24:49.421 [2024-05-15 03:19:20.407136] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:49.421 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:49.421 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:49.421 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:49.421 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:49.421 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:49.421 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:49.421 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:49.421 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:49.421 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:49.421 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:49.421 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.421 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.680 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:49.680 "name": "raid_bdev1", 00:24:49.680 "uuid": "ab582b0c-9e99-4843-a0b9-c3034821a740", 00:24:49.680 "strip_size_kb": 0, 00:24:49.680 "state": "online", 00:24:49.680 "raid_level": "raid1", 00:24:49.680 "superblock": true, 00:24:49.680 "num_base_bdevs": 2, 00:24:49.680 "num_base_bdevs_discovered": 1, 00:24:49.680 "num_base_bdevs_operational": 1, 00:24:49.680 "base_bdevs_list": [ 00:24:49.680 { 00:24:49.680 "name": null, 00:24:49.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.680 "is_configured": false, 00:24:49.680 "data_offset": 256, 00:24:49.680 "data_size": 7936 00:24:49.680 }, 00:24:49.680 { 00:24:49.680 "name": "pt2", 00:24:49.680 "uuid": "a24600a4-765e-5e41-923d-ffe0da4f88eb", 00:24:49.680 "is_configured": true, 00:24:49.680 "data_offset": 256, 00:24:49.680 "data_size": 7936 00:24:49.680 } 00:24:49.680 ] 00:24:49.680 }' 00:24:49.680 03:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:49.680 03:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.248 03:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:50.506 [2024-05-15 03:19:21.546153] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:50.506 [2024-05-15 03:19:21.546178] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:50.506 [2024-05-15 03:19:21.546231] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:50.506 [2024-05-15 03:19:21.546276] bdev_raid.c: 
425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:50.506 [2024-05-15 03:19:21.546285] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x179f910 name raid_bdev1, state offline 00:24:50.506 03:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.506 03:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:50.765 03:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:50.765 03:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:50.765 03:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:50.765 03:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:50.765 03:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:51.023 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:51.023 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:51.023 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:51.023 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:51.023 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:24:51.023 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:51.281 [2024-05-15 03:19:22.316170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:51.281 [2024-05-15 03:19:22.316212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.281 [2024-05-15 03:19:22.316228] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x179b310 00:24:51.281 [2024-05-15 03:19:22.316238] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.281 [2024-05-15 03:19:22.317757] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.281 [2024-05-15 03:19:22.317781] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:51.281 [2024-05-15 03:19:22.317823] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:51.281 [2024-05-15 03:19:22.317845] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:51.281 [2024-05-15 03:19:22.317933] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x179fd50 00:24:51.281 [2024-05-15 03:19:22.317941] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:51.281 [2024-05-15 03:19:22.317998] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x179d090 00:24:51.281 [2024-05-15 03:19:22.318101] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x179fd50 00:24:51.281 [2024-05-15 03:19:22.318109] 
bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x179fd50 00:24:51.281 [2024-05-15 03:19:22.318176] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.281 pt2 00:24:51.281 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:51.281 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:51.281 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:51.281 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:51.281 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:51.281 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:51.281 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:51.281 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:51.281 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:51.281 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:51.282 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.282 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.540 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:51.540 "name": "raid_bdev1", 00:24:51.540 "uuid": "ab582b0c-9e99-4843-a0b9-c3034821a740", 00:24:51.540 "strip_size_kb": 0, 00:24:51.540 "state": "online", 00:24:51.540 "raid_level": "raid1", 00:24:51.540 "superblock": true, 00:24:51.540 "num_base_bdevs": 2, 00:24:51.540 "num_base_bdevs_discovered": 1, 00:24:51.540 "num_base_bdevs_operational": 1, 00:24:51.540 "base_bdevs_list": [ 00:24:51.540 { 00:24:51.540 "name": null, 00:24:51.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.540 "is_configured": false, 00:24:51.540 "data_offset": 256, 00:24:51.540 "data_size": 7936 00:24:51.540 }, 00:24:51.540 { 00:24:51.540 "name": "pt2", 00:24:51.540 "uuid": "a24600a4-765e-5e41-923d-ffe0da4f88eb", 00:24:51.540 "is_configured": true, 00:24:51.540 "data_offset": 256, 00:24:51.540 "data_size": 7936 00:24:51.540 } 00:24:51.540 ] 00:24:51.540 }' 00:24:51.540 03:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:51.540 03:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.106 03:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:24:52.106 03:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:52.106 03:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:24:52.365 [2024-05-15 03:19:23.459430] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:24:52.365 03:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # '[' ab582b0c-9e99-4843-a0b9-c3034821a740 '!=' ab582b0c-9e99-4843-a0b9-c3034821a740 ']' 00:24:52.365 03:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@568 -- # killprocess 10333 00:24:52.365 03:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@946 -- # '[' -z 10333 ']' 00:24:52.365 03:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # kill -0 10333 00:24:52.365 03:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # uname 00:24:52.365 03:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:52.365 03:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 10333 00:24:52.623 03:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:52.623 03:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:52.623 03:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 10333' 00:24:52.623 killing process with pid 10333 00:24:52.623 03:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@965 -- # kill 10333 00:24:52.623 [2024-05-15 03:19:23.530029] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:52.623 [2024-05-15 03:19:23.530086] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:52.623 [2024-05-15 03:19:23.530132] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:52.623 [2024-05-15 03:19:23.530141] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x179fd50 name raid_bdev1, state offline 00:24:52.623 03:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # wait 10333 00:24:52.623 [2024-05-15 03:19:23.550972] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:52.623 03:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # return 0 00:24:52.623 00:24:52.623 real 0m14.281s 00:24:52.623 user 0m26.322s 00:24:52.623 sys 0m2.104s 00:24:52.623 03:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:52.623 03:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.623 ************************************ 00:24:52.623 END TEST raid_superblock_test_md_separate 00:24:52.623 ************************************ 00:24:52.881 03:19:23 bdev_raid -- bdev/bdev_raid.sh@853 -- # '[' true = true ']' 00:24:52.881 03:19:23 bdev_raid -- bdev/bdev_raid.sh@854 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:24:52.882 03:19:23 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:24:52.882 03:19:23 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:52.882 03:19:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:52.882 ************************************ 00:24:52.882 START TEST raid_rebuild_test_sb_md_separate 00:24:52.882 ************************************ 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1121 -- # 
raid_rebuild_test raid1 2 true false true 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=2 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local superblock=true 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local background_io=false 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local verify=true 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # (( i = 1 )) 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # echo BaseBdev1 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # echo BaseBdev2 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # local strip_size 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@582 -- # local create_arg 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@584 -- # local data_offset 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # '[' true = true ']' 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # create_arg+=' -s' 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # raid_pid=12882 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # waitforlisten 12882 /var/tmp/spdk-raid.sock 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 12882 ']' 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:52.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:52.882 03:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.882 [2024-05-15 03:19:23.912302] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:24:52.882 [2024-05-15 03:19:23.912355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12882 ] 00:24:52.882 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:52.882 Zero copy mechanism will not be used. 00:24:52.882 [2024-05-15 03:19:24.009772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.140 [2024-05-15 03:19:24.105092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.140 [2024-05-15 03:19:24.167370] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:53.140 [2024-05-15 03:19:24.167403] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:53.707 03:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:53.707 03:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:24:53.707 03:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:24:53.707 03:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:24:53.965 BaseBdev1_malloc 00:24:53.965 03:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:54.223 [2024-05-15 03:19:25.349167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:54.223 [2024-05-15 03:19:25.349210] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.223 [2024-05-15 03:19:25.349232] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaaa540 00:24:54.223 [2024-05-15 03:19:25.349242] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.223 [2024-05-15 03:19:25.350804] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.223 [2024-05-15 03:19:25.350831] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:54.223 BaseBdev1 00:24:54.223 03:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:24:54.223 03:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:24:54.482 BaseBdev2_malloc 00:24:54.482 03:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:54.741 [2024-05-15 03:19:25.856076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:54.741 [2024-05-15 03:19:25.856117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.741 [2024-05-15 03:19:25.856136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbbc410 00:24:54.741 [2024-05-15 03:19:25.856146] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.741 [2024-05-15 03:19:25.857553] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.741 [2024-05-15 03:19:25.857577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:54.741 BaseBdev2 00:24:54.741 03:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:24:54.999 spare_malloc 00:24:54.999 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:55.258 spare_delay 00:24:55.258 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@614 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:55.516 [2024-05-15 03:19:26.611381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:55.516 [2024-05-15 03:19:26.611423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.516 [2024-05-15 03:19:26.611447] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbbfed0 00:24:55.516 [2024-05-15 03:19:26.611459] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.516 [2024-05-15 03:19:26.612945] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.516 [2024-05-15 03:19:26.612970] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:55.516 spare 00:24:55.516 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:55.775 [2024-05-15 03:19:26.864203] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:55.775 [2024-05-15 03:19:26.865552] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:55.775 [2024-05-15 03:19:26.865710] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0xbc24a0 00:24:55.775 [2024-05-15 03:19:26.865722] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:55.775 [2024-05-15 03:19:26.865793] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa287d0 00:24:55.775 [2024-05-15 03:19:26.865920] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbc24a0 00:24:55.775 [2024-05-15 03:19:26.865929] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbc24a0 00:24:55.775 [2024-05-15 03:19:26.865999] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:55.775 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:55.775 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:55.775 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:55.775 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:55.775 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:55.775 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:55.775 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:55.775 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:55.775 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:55.775 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:55.775 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.775 03:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.033 03:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:56.033 "name": "raid_bdev1", 00:24:56.033 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:24:56.033 "strip_size_kb": 0, 00:24:56.033 "state": "online", 00:24:56.033 "raid_level": "raid1", 00:24:56.033 "superblock": true, 00:24:56.033 "num_base_bdevs": 2, 00:24:56.033 "num_base_bdevs_discovered": 2, 00:24:56.033 "num_base_bdevs_operational": 2, 00:24:56.033 "base_bdevs_list": [ 00:24:56.033 { 00:24:56.033 "name": "BaseBdev1", 00:24:56.033 "uuid": "dbe6f92f-7cdd-5798-824b-ca56df4a04ac", 00:24:56.033 "is_configured": true, 00:24:56.033 "data_offset": 256, 00:24:56.033 "data_size": 7936 00:24:56.033 }, 00:24:56.033 { 00:24:56.033 "name": "BaseBdev2", 00:24:56.033 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:24:56.033 "is_configured": true, 00:24:56.033 "data_offset": 256, 00:24:56.033 "data_size": 7936 00:24:56.033 } 00:24:56.033 ] 00:24:56.033 }' 00:24:56.033 03:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:56.033 03:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.969 03:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:56.969 03:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:24:56.969 [2024-05-15 03:19:28.003458] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:24:56.969 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=7936 00:24:56.969 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.969 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # data_offset=256 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@626 -- # '[' false = true ']' 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' true = true ']' 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@630 -- # local write_unit_size 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:57.229 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:57.487 [2024-05-15 03:19:28.520646] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbc2f40 00:24:57.487 /dev/nbd0 00:24:57.487 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:57.487 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:57.487 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:24:57.487 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:24:57.487 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:24:57.487 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
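The records that follow expose raid_bdev1 as a Linux block device over NBD and fill all 7936 4096-byte blocks with random data before a base bdev is pulled, so the subsequent rebuild has real data to reconstruct. A minimal sketch of that prime step, using only the nbd_start_disk/nbd_stop_disk RPCs and the dd invocation visible in this log, and assuming a free /dev/nbd0 on the host (paths and block counts taken from the surrounding records):

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Expose the raid bdev as /dev/nbd0 via the SPDK NBD RPC.
  $rpc -s $sock nbd_start_disk raid_bdev1 /dev/nbd0

  # Write every data block with O_DIRECT so the I/O reaches the raid bdev
  # instead of stopping in the page cache; 7936 blocks of 4096 bytes matches
  # the blockcnt/blocklen reported when raid_bdev1 was configured.
  dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct

  # Detach the NBD device before any base bdevs are removed.
  $rpc -s $sock nbd_stop_disk /dev/nbd0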
00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:57.488 1+0 records in 00:24:57.488 1+0 records out 00:24:57.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226033 s, 18.1 MB/s 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # '[' raid1 = raid5f ']' 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@638 -- # write_unit_size=1 00:24:57.488 03:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:24:58.423 7936+0 records in 00:24:58.423 7936+0 records out 00:24:58.423 32505856 bytes (33 MB, 31 MiB) copied, 0.72345 s, 44.9 MB/s 00:24:58.423 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@641 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:58.423 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:58.423 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:58.423 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:58.423 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:24:58.423 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:58.423 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:58.423 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:58.423 [2024-05-15 03:19:29.578462] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.423 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:58.423 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:58.423 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:58.681 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:58.681 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:58.681 03:19:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:24:58.681 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:24:58.681 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:58.681 [2024-05-15 03:19:29.819147] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:58.940 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:58.940 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:58.940 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:58.940 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:58.940 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:58.940 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:58.940 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:58.940 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:58.940 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:58.940 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:58.940 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.940 03:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.198 03:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:59.198 "name": "raid_bdev1", 00:24:59.198 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:24:59.198 "strip_size_kb": 0, 00:24:59.198 "state": "online", 00:24:59.198 "raid_level": "raid1", 00:24:59.198 "superblock": true, 00:24:59.198 "num_base_bdevs": 2, 00:24:59.198 "num_base_bdevs_discovered": 1, 00:24:59.198 "num_base_bdevs_operational": 1, 00:24:59.198 "base_bdevs_list": [ 00:24:59.198 { 00:24:59.198 "name": null, 00:24:59.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.198 "is_configured": false, 00:24:59.198 "data_offset": 256, 00:24:59.198 "data_size": 7936 00:24:59.198 }, 00:24:59.198 { 00:24:59.198 "name": "BaseBdev2", 00:24:59.198 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:24:59.198 "is_configured": true, 00:24:59.198 "data_offset": 256, 00:24:59.198 "data_size": 7936 00:24:59.198 } 00:24:59.198 ] 00:24:59.198 }' 00:24:59.198 03:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:59.198 03:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.764 03:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:00.022 [2024-05-15 03:19:30.950242] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:00.022 [2024-05-15 03:19:30.952467] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa292f0 00:25:00.023 [2024-05-15 03:19:30.954526] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:00.023 03:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # sleep 1 00:25:00.958 03:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:00.958 03:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:00.958 03:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:00.958 03:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:00.958 03:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:00.958 03:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.958 03:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.217 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:01.217 "name": "raid_bdev1", 00:25:01.217 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:01.217 "strip_size_kb": 0, 00:25:01.217 "state": "online", 00:25:01.217 "raid_level": "raid1", 00:25:01.217 "superblock": true, 00:25:01.217 "num_base_bdevs": 2, 00:25:01.217 "num_base_bdevs_discovered": 2, 00:25:01.217 "num_base_bdevs_operational": 2, 00:25:01.217 "process": { 00:25:01.217 "type": "rebuild", 00:25:01.217 "target": "spare", 00:25:01.217 "progress": { 00:25:01.217 "blocks": 3072, 00:25:01.217 "percent": 38 00:25:01.217 } 00:25:01.217 }, 00:25:01.217 "base_bdevs_list": [ 00:25:01.217 { 00:25:01.217 "name": "spare", 00:25:01.217 "uuid": "fbde4d8d-6fc5-5fb3-b9b9-20c55fcc2752", 00:25:01.217 "is_configured": true, 00:25:01.217 "data_offset": 256, 00:25:01.217 "data_size": 7936 00:25:01.217 }, 00:25:01.217 { 00:25:01.217 "name": "BaseBdev2", 00:25:01.217 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:01.217 "is_configured": true, 00:25:01.217 "data_offset": 256, 00:25:01.217 "data_size": 7936 00:25:01.217 } 00:25:01.217 ] 00:25:01.217 }' 00:25:01.217 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:01.217 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:01.217 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:01.217 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:01.217 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:01.475 [2024-05-15 03:19:32.567684] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:01.733 [2024-05-15 03:19:32.667651] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:25:01.733 [2024-05-15 03:19:32.667696] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:01.733 "name": "raid_bdev1", 00:25:01.733 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:01.733 "strip_size_kb": 0, 00:25:01.733 "state": "online", 00:25:01.733 "raid_level": "raid1", 00:25:01.733 "superblock": true, 00:25:01.733 "num_base_bdevs": 2, 00:25:01.733 "num_base_bdevs_discovered": 1, 00:25:01.733 "num_base_bdevs_operational": 1, 00:25:01.733 "base_bdevs_list": [ 00:25:01.733 { 00:25:01.733 "name": null, 00:25:01.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.733 "is_configured": false, 00:25:01.733 "data_offset": 256, 00:25:01.733 "data_size": 7936 00:25:01.733 }, 00:25:01.733 { 00:25:01.733 "name": "BaseBdev2", 00:25:01.733 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:01.733 "is_configured": true, 00:25:01.733 "data_offset": 256, 00:25:01.733 "data_size": 7936 00:25:01.733 } 00:25:01.733 ] 00:25:01.733 }' 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:01.733 03:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.668 03:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:02.668 03:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:02.668 03:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:02.668 03:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:02.668 03:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:02.668 03:19:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.668 03:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.669 03:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:02.669 "name": "raid_bdev1", 00:25:02.669 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:02.669 "strip_size_kb": 0, 00:25:02.669 "state": "online", 00:25:02.669 "raid_level": "raid1", 00:25:02.669 "superblock": true, 00:25:02.669 "num_base_bdevs": 2, 00:25:02.669 "num_base_bdevs_discovered": 1, 00:25:02.669 "num_base_bdevs_operational": 1, 00:25:02.669 "base_bdevs_list": [ 00:25:02.669 { 00:25:02.669 "name": null, 00:25:02.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.669 "is_configured": false, 00:25:02.669 "data_offset": 256, 00:25:02.669 "data_size": 7936 00:25:02.669 }, 00:25:02.669 { 00:25:02.669 "name": "BaseBdev2", 00:25:02.669 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:02.669 "is_configured": true, 00:25:02.669 "data_offset": 256, 00:25:02.669 "data_size": 7936 00:25:02.669 } 00:25:02.669 ] 00:25:02.669 }' 00:25:02.669 03:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:02.669 03:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:02.669 03:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:02.927 03:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:02.927 03:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@667 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:02.927 [2024-05-15 03:19:34.082626] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:02.927 [2024-05-15 03:19:34.084856] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa292f0 00:25:03.185 [2024-05-15 03:19:34.086363] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:03.185 03:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@668 -- # sleep 1 00:25:04.119 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:04.119 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:04.119 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:04.119 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:04.119 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:04.119 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.119 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.377 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:04.377 "name": "raid_bdev1", 00:25:04.377 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:04.377 "strip_size_kb": 0, 00:25:04.377 "state": "online", 00:25:04.377 "raid_level": "raid1", 00:25:04.377 "superblock": true, 00:25:04.377 "num_base_bdevs": 2, 00:25:04.377 "num_base_bdevs_discovered": 2, 00:25:04.377 "num_base_bdevs_operational": 2, 00:25:04.377 "process": { 00:25:04.377 "type": "rebuild", 00:25:04.377 "target": "spare", 00:25:04.377 "progress": { 00:25:04.377 "blocks": 3072, 00:25:04.377 "percent": 38 00:25:04.377 } 00:25:04.377 }, 00:25:04.377 "base_bdevs_list": [ 00:25:04.377 { 00:25:04.377 "name": "spare", 00:25:04.377 "uuid": "fbde4d8d-6fc5-5fb3-b9b9-20c55fcc2752", 00:25:04.378 "is_configured": true, 00:25:04.378 "data_offset": 256, 00:25:04.378 "data_size": 7936 00:25:04.378 }, 00:25:04.378 { 00:25:04.378 "name": "BaseBdev2", 00:25:04.378 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:04.378 "is_configured": true, 00:25:04.378 "data_offset": 256, 00:25:04.378 "data_size": 7936 00:25:04.378 } 00:25:04.378 ] 00:25:04.378 }' 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@671 -- # '[' true = true ']' 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@671 -- # '[' = false ']' 00:25:04.378 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 671: [: =: unary operator expected 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=2 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@698 -- # '[' 2 -gt 2 ']' 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # local timeout=952 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.378 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.636 03:19:35 
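The "unary operator expected" message above is a real shell bug captured by the trace: at bdev_raid.sh line 671 an empty variable expands to nothing inside a single-bracket test, leaving '[' = false ']' with no left operand. A hedged sketch of the usual fix; the variable name is illustrative, since the script's own name is not visible in the trace:

    # Unquoted and empty, $flag vanishes and [ sees only "= false":
    #   [ $flag = false ]   ->   [ = false ]   ->   unary operator expected
    # Quoting preserves an (empty) operand, so the test degrades gracefully:
    if [ "$flag" = false ]; then
        echo "flag is false"
    fi
    # Bash's [[ ]] avoids word splitting altogether: [[ $flag = false ]]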
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:04.636 "name": "raid_bdev1", 00:25:04.636 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:04.636 "strip_size_kb": 0, 00:25:04.636 "state": "online", 00:25:04.636 "raid_level": "raid1", 00:25:04.636 "superblock": true, 00:25:04.636 "num_base_bdevs": 2, 00:25:04.636 "num_base_bdevs_discovered": 2, 00:25:04.636 "num_base_bdevs_operational": 2, 00:25:04.636 "process": { 00:25:04.636 "type": "rebuild", 00:25:04.636 "target": "spare", 00:25:04.636 "progress": { 00:25:04.636 "blocks": 3840, 00:25:04.636 "percent": 48 00:25:04.636 } 00:25:04.636 }, 00:25:04.636 "base_bdevs_list": [ 00:25:04.636 { 00:25:04.636 "name": "spare", 00:25:04.636 "uuid": "fbde4d8d-6fc5-5fb3-b9b9-20c55fcc2752", 00:25:04.636 "is_configured": true, 00:25:04.636 "data_offset": 256, 00:25:04.636 "data_size": 7936 00:25:04.636 }, 00:25:04.636 { 00:25:04.636 "name": "BaseBdev2", 00:25:04.636 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:04.636 "is_configured": true, 00:25:04.636 "data_offset": 256, 00:25:04.636 "data_size": 7936 00:25:04.636 } 00:25:04.636 ] 00:25:04.636 }' 00:25:04.636 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:04.636 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:04.636 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:04.894 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:04.894 03:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # sleep 1 00:25:05.827 03:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:25:05.827 03:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:05.827 03:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:05.827 03:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:05.827 03:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:05.827 03:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:05.827 03:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.828 03:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.086 03:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:06.086 "name": "raid_bdev1", 00:25:06.086 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:06.086 "strip_size_kb": 0, 00:25:06.086 "state": "online", 00:25:06.086 "raid_level": "raid1", 00:25:06.086 "superblock": true, 00:25:06.086 "num_base_bdevs": 2, 00:25:06.086 "num_base_bdevs_discovered": 2, 00:25:06.086 "num_base_bdevs_operational": 2, 00:25:06.086 "process": { 00:25:06.086 "type": "rebuild", 00:25:06.086 "target": "spare", 00:25:06.086 "progress": { 00:25:06.086 "blocks": 7424, 00:25:06.086 "percent": 93 00:25:06.086 } 00:25:06.086 }, 00:25:06.086 
"base_bdevs_list": [ 00:25:06.086 { 00:25:06.086 "name": "spare", 00:25:06.086 "uuid": "fbde4d8d-6fc5-5fb3-b9b9-20c55fcc2752", 00:25:06.086 "is_configured": true, 00:25:06.086 "data_offset": 256, 00:25:06.086 "data_size": 7936 00:25:06.086 }, 00:25:06.086 { 00:25:06.086 "name": "BaseBdev2", 00:25:06.086 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:06.086 "is_configured": true, 00:25:06.086 "data_offset": 256, 00:25:06.087 "data_size": 7936 00:25:06.087 } 00:25:06.087 ] 00:25:06.087 }' 00:25:06.087 03:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:06.087 03:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:06.087 03:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:06.087 03:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:06.087 03:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # sleep 1 00:25:06.087 [2024-05-15 03:19:37.210233] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:06.087 [2024-05-15 03:19:37.210290] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:06.087 [2024-05-15 03:19:37.210371] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.067 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:25:07.067 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:07.067 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:07.067 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:07.067 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:07.067 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:07.067 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.067 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.335 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:07.335 "name": "raid_bdev1", 00:25:07.335 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:07.335 "strip_size_kb": 0, 00:25:07.335 "state": "online", 00:25:07.335 "raid_level": "raid1", 00:25:07.335 "superblock": true, 00:25:07.335 "num_base_bdevs": 2, 00:25:07.335 "num_base_bdevs_discovered": 2, 00:25:07.335 "num_base_bdevs_operational": 2, 00:25:07.335 "base_bdevs_list": [ 00:25:07.335 { 00:25:07.335 "name": "spare", 00:25:07.335 "uuid": "fbde4d8d-6fc5-5fb3-b9b9-20c55fcc2752", 00:25:07.335 "is_configured": true, 00:25:07.335 "data_offset": 256, 00:25:07.335 "data_size": 7936 00:25:07.335 }, 00:25:07.335 { 00:25:07.335 "name": "BaseBdev2", 00:25:07.335 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:07.335 "is_configured": true, 00:25:07.335 "data_offset": 256, 00:25:07.335 "data_size": 7936 00:25:07.335 } 00:25:07.335 ] 
00:25:07.335 }' 00:25:07.335 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:07.335 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:07.335 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:07.593 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:07.593 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # break 00:25:07.593 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:07.593 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:07.593 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:07.593 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:07.593 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:07.593 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.593 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:07.851 "name": "raid_bdev1", 00:25:07.851 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:07.851 "strip_size_kb": 0, 00:25:07.851 "state": "online", 00:25:07.851 "raid_level": "raid1", 00:25:07.851 "superblock": true, 00:25:07.851 "num_base_bdevs": 2, 00:25:07.851 "num_base_bdevs_discovered": 2, 00:25:07.851 "num_base_bdevs_operational": 2, 00:25:07.851 "base_bdevs_list": [ 00:25:07.851 { 00:25:07.851 "name": "spare", 00:25:07.851 "uuid": "fbde4d8d-6fc5-5fb3-b9b9-20c55fcc2752", 00:25:07.851 "is_configured": true, 00:25:07.851 "data_offset": 256, 00:25:07.851 "data_size": 7936 00:25:07.851 }, 00:25:07.851 { 00:25:07.851 "name": "BaseBdev2", 00:25:07.851 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:07.851 "is_configured": true, 00:25:07.851 "data_offset": 256, 00:25:07.851 "data_size": 7936 00:25:07.851 } 00:25:07.851 ] 00:25:07.851 }' 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.851 03:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.110 03:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:08.110 "name": "raid_bdev1", 00:25:08.110 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:08.110 "strip_size_kb": 0, 00:25:08.110 "state": "online", 00:25:08.110 "raid_level": "raid1", 00:25:08.110 "superblock": true, 00:25:08.110 "num_base_bdevs": 2, 00:25:08.110 "num_base_bdevs_discovered": 2, 00:25:08.110 "num_base_bdevs_operational": 2, 00:25:08.110 "base_bdevs_list": [ 00:25:08.110 { 00:25:08.110 "name": "spare", 00:25:08.110 "uuid": "fbde4d8d-6fc5-5fb3-b9b9-20c55fcc2752", 00:25:08.110 "is_configured": true, 00:25:08.110 "data_offset": 256, 00:25:08.110 "data_size": 7936 00:25:08.110 }, 00:25:08.110 { 00:25:08.110 "name": "BaseBdev2", 00:25:08.110 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:08.110 "is_configured": true, 00:25:08.110 "data_offset": 256, 00:25:08.110 "data_size": 7936 00:25:08.110 } 00:25:08.110 ] 00:25:08.110 }' 00:25:08.110 03:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:08.110 03:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.677 03:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@724 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:08.936 [2024-05-15 03:19:39.956926] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:08.936 [2024-05-15 03:19:39.956956] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:08.936 [2024-05-15 03:19:39.957012] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:08.936 [2024-05-15 03:19:39.957069] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:08.936 [2024-05-15 03:19:39.957078] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbc24a0 name raid_bdev1, state offline 00:25:08.936 03:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@725 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.936 03:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@725 -- # jq length 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]] 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@727 -- # '[' true = true ']' 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@728 -- # '[' false = true ']' 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:09.195 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:09.453 /dev/nbd0 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:09.453 1+0 records in 00:25:09.453 1+0 records out 00:25:09.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201425 s, 20.3 MB/s 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:09.453 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:09.710 /dev/nbd1 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:09.710 1+0 records in 00:25:09.710 1+0 records out 00:25:09.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257381 s, 15.9 MB/s 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:09.710 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:09.968 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:09.968 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:09.968 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:09.968 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:09.968 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:25:09.968 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:09.968 03:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:10.226 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:10.226 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:10.226 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:10.226 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:10.226 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:10.226 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:10.226 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:10.226 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:10.226 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:10.226 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:10.484 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:10.484 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:10.484 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:10.484 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:10.484 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:10.484 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:10.484 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:10.484 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:10.484 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # '[' true = true ']' 00:25:10.484 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:25:10.484 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev1 ']' 00:25:10.484 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:10.742 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:11.006 [2024-05-15 03:19:41.912330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:11.006 [2024-05-15 03:19:41.912376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.006 [2024-05-15 03:19:41.912395] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbc38b0 00:25:11.006 [2024-05-15 03:19:41.912404] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.006 [2024-05-15 03:19:41.913925] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.006 [2024-05-15 03:19:41.913950] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:11.006 [2024-05-15 03:19:41.913996] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:11.006 [2024-05-15 03:19:41.914019] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:11.006 BaseBdev1 00:25:11.006 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:25:11.006 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev2 ']' 00:25:11.006 03:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:25:11.268 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:11.268 [2024-05-15 03:19:42.421687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:11.268 [2024-05-15 03:19:42.421721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.268 [2024-05-15 03:19:42.421745] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbc2c20 00:25:11.268 [2024-05-15 03:19:42.421756] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.268 [2024-05-15 03:19:42.421921] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.268 [2024-05-15 03:19:42.421936] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:11.268 [2024-05-15 03:19:42.421973] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:25:11.268 [2024-05-15 03:19:42.421981] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:25:11.268 [2024-05-15 03:19:42.421988] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:11.268 [2024-05-15 03:19:42.422000] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb2ea90 name raid_bdev1, state configuring 00:25:11.268 [2024-05-15 03:19:42.422026] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:11.268 BaseBdev2 00:25:11.526 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:11.787 
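Each base bdev in this test is a passthru vbdev stacked on a malloc bdev, so the delete/re-create cycle just traced for BaseBdev1 and BaseBdev2 (and about to repeat for spare) stands in for a device dropping out and returning. On re-creation, examine finds the raid superblock on the returning bdev and compares sequence numbers to decide whether it may rejoin raid_bdev1. A sketch of one cycle, reusing the $rpc/$sock shorthands from the first sketch:

    # Tear down the passthru: the raid layer sees its base bdev disappear.
    "$rpc" -s "$sock" bdev_passthru_delete BaseBdev2
    # Re-create it on the same malloc backing: examine re-reads the on-disk
    # superblock (bdev_raid.c:3528) and checks its seq_number against the array.
    "$rpc" -s "$sock" bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2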
03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:11.787 [2024-05-15 03:19:42.935113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:11.787 [2024-05-15 03:19:42.935145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.787 [2024-05-15 03:19:42.935163] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa28680 00:25:11.787 [2024-05-15 03:19:42.935172] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.787 [2024-05-15 03:19:42.935353] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.787 [2024-05-15 03:19:42.935367] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:11.787 [2024-05-15 03:19:42.935416] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:11.787 [2024-05-15 03:19:42.935430] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:11.787 spare 00:25:12.046 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:12.046 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:12.046 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:12.046 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:12.046 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:12.046 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:12.046 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:12.046 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:12.046 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:12.046 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:12.046 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.046 03:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.046 [2024-05-15 03:19:43.035754] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0xbc3fb0 00:25:12.046 [2024-05-15 03:19:43.035769] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:12.046 [2024-05-15 03:19:43.035840] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb33bf0 00:25:12.046 [2024-05-15 03:19:43.035977] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbc3fb0 00:25:12.046 [2024-05-15 03:19:43.035986] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbc3fb0 00:25:12.046 [2024-05-15 03:19:43.036063] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.305 
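verify_raid_bdev_state (bdev_raid.sh@117-129), whose output follows, asserts the array-level fields rather than the rebuild process. A condensed sketch of that check under the same assumptions as before; the helper name and argument order are illustrative:

    check_raid_state() {
        local name=$1 want_state=$2 want_level=$3 want_operational=$4 info
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r '.state'      <<< "$info") == "$want_state" ]] &&
        [[ $(jq -r '.raid_level' <<< "$info") == "$want_level" ]] &&
        (( $(jq -r '.num_base_bdevs_operational' <<< "$info") == want_operational ))
    }

    check_raid_state raid_bdev1 online raid1 2   # all base bdevs back in place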
03:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:12.305 "name": "raid_bdev1", 00:25:12.305 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:12.305 "strip_size_kb": 0, 00:25:12.305 "state": "online", 00:25:12.305 "raid_level": "raid1", 00:25:12.305 "superblock": true, 00:25:12.305 "num_base_bdevs": 2, 00:25:12.305 "num_base_bdevs_discovered": 2, 00:25:12.305 "num_base_bdevs_operational": 2, 00:25:12.305 "base_bdevs_list": [ 00:25:12.305 { 00:25:12.306 "name": "spare", 00:25:12.306 "uuid": "fbde4d8d-6fc5-5fb3-b9b9-20c55fcc2752", 00:25:12.306 "is_configured": true, 00:25:12.306 "data_offset": 256, 00:25:12.306 "data_size": 7936 00:25:12.306 }, 00:25:12.306 { 00:25:12.306 "name": "BaseBdev2", 00:25:12.306 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:12.306 "is_configured": true, 00:25:12.306 "data_offset": 256, 00:25:12.306 "data_size": 7936 00:25:12.306 } 00:25:12.306 ] 00:25:12.306 }' 00:25:12.306 03:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:12.306 03:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.871 03:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:12.871 03:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:12.871 03:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:12.871 03:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:12.871 03:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:12.871 03:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.871 03:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.131 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:13.131 "name": "raid_bdev1", 00:25:13.131 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:13.131 "strip_size_kb": 0, 00:25:13.131 "state": "online", 00:25:13.131 "raid_level": "raid1", 00:25:13.131 "superblock": true, 00:25:13.131 "num_base_bdevs": 2, 00:25:13.131 "num_base_bdevs_discovered": 2, 00:25:13.131 "num_base_bdevs_operational": 2, 00:25:13.131 "base_bdevs_list": [ 00:25:13.131 { 00:25:13.131 "name": "spare", 00:25:13.131 "uuid": "fbde4d8d-6fc5-5fb3-b9b9-20c55fcc2752", 00:25:13.131 "is_configured": true, 00:25:13.131 "data_offset": 256, 00:25:13.131 "data_size": 7936 00:25:13.131 }, 00:25:13.131 { 00:25:13.131 "name": "BaseBdev2", 00:25:13.131 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:13.131 "is_configured": true, 00:25:13.131 "data_offset": 256, 00:25:13.131 "data_size": 7936 00:25:13.131 } 00:25:13.131 ] 00:25:13.131 }' 00:25:13.131 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:13.131 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:13.131 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:13.131 03:19:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:13.131 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.131 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:13.390 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.390 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:13.648 [2024-05-15 03:19:44.675914] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:13.649 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:13.649 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:13.649 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:13.649 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:13.649 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:13.649 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:13.649 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:13.649 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:13.649 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:13.649 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:13.649 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.649 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.907 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:13.907 "name": "raid_bdev1", 00:25:13.907 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:13.907 "strip_size_kb": 0, 00:25:13.907 "state": "online", 00:25:13.907 "raid_level": "raid1", 00:25:13.907 "superblock": true, 00:25:13.907 "num_base_bdevs": 2, 00:25:13.907 "num_base_bdevs_discovered": 1, 00:25:13.907 "num_base_bdevs_operational": 1, 00:25:13.907 "base_bdevs_list": [ 00:25:13.907 { 00:25:13.907 "name": null, 00:25:13.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.907 "is_configured": false, 00:25:13.907 "data_offset": 256, 00:25:13.907 "data_size": 7936 00:25:13.907 }, 00:25:13.907 { 00:25:13.907 "name": "BaseBdev2", 00:25:13.907 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:13.907 "is_configured": true, 00:25:13.907 "data_offset": 256, 00:25:13.907 "data_size": 7936 00:25:13.907 } 00:25:13.907 ] 00:25:13.907 }' 00:25:13.907 03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:13.907 
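Removing "spare" just above leaves raid_bdev1 online but degraded: one slot in base_bdevs_list goes null and only BaseBdev2 stays configured. The re-add that follows succeeds because the stale superblock on "spare" carries seq_number 4 against the array's 5, so the raid layer accepts it back and starts a fresh rebuild. The sequence, in the same sketch style as above:

    "$rpc" -s "$sock" bdev_raid_remove_base_bdev spare           # degrade: 2 -> 1 operational
    check_raid_state raid_bdev1 online raid1 1                   # still online on one leg
    "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare   # re-add, rebuild begins
    sleep 1
    check_rebuild_process raid_bdev1 rebuild spare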
03:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:14.473 03:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:14.733 [2024-05-15 03:19:45.810957] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:14.733 [2024-05-15 03:19:45.811104] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:14.733 [2024-05-15 03:19:45.811118] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:14.733 [2024-05-15 03:19:45.811141] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:14.733 [2024-05-15 03:19:45.813247] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb33bf0 00:25:14.733 [2024-05-15 03:19:45.814732] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:14.733 03:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # sleep 1 00:25:16.107 03:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:16.107 03:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:16.107 03:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:16.107 03:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:16.107 03:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:16.107 03:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.107 03:19:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.107 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:16.107 "name": "raid_bdev1", 00:25:16.107 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:16.107 "strip_size_kb": 0, 00:25:16.107 "state": "online", 00:25:16.107 "raid_level": "raid1", 00:25:16.107 "superblock": true, 00:25:16.107 "num_base_bdevs": 2, 00:25:16.107 "num_base_bdevs_discovered": 2, 00:25:16.107 "num_base_bdevs_operational": 2, 00:25:16.107 "process": { 00:25:16.107 "type": "rebuild", 00:25:16.107 "target": "spare", 00:25:16.107 "progress": { 00:25:16.107 "blocks": 3072, 00:25:16.107 "percent": 38 00:25:16.107 } 00:25:16.107 }, 00:25:16.107 "base_bdevs_list": [ 00:25:16.107 { 00:25:16.107 "name": "spare", 00:25:16.107 "uuid": "fbde4d8d-6fc5-5fb3-b9b9-20c55fcc2752", 00:25:16.107 "is_configured": true, 00:25:16.107 "data_offset": 256, 00:25:16.107 "data_size": 7936 00:25:16.107 }, 00:25:16.107 { 00:25:16.107 "name": "BaseBdev2", 00:25:16.107 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:16.107 "is_configured": true, 00:25:16.107 "data_offset": 256, 00:25:16.107 "data_size": 7936 00:25:16.107 } 00:25:16.107 ] 00:25:16.107 }' 00:25:16.107 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:16.107 03:19:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:16.107 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:16.107 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:16.107 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:16.365 [2024-05-15 03:19:47.416382] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:16.365 [2024-05-15 03:19:47.427132] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:16.365 [2024-05-15 03:19:47.427170] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.365 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:16.365 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:16.365 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:16.365 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:16.365 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:16.365 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:16.365 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:16.366 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:16.366 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:16.366 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:16.366 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.366 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.624 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:16.624 "name": "raid_bdev1", 00:25:16.624 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:16.624 "strip_size_kb": 0, 00:25:16.624 "state": "online", 00:25:16.624 "raid_level": "raid1", 00:25:16.624 "superblock": true, 00:25:16.624 "num_base_bdevs": 2, 00:25:16.624 "num_base_bdevs_discovered": 1, 00:25:16.624 "num_base_bdevs_operational": 1, 00:25:16.624 "base_bdevs_list": [ 00:25:16.624 { 00:25:16.624 "name": null, 00:25:16.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.624 "is_configured": false, 00:25:16.624 "data_offset": 256, 00:25:16.624 "data_size": 7936 00:25:16.624 }, 00:25:16.624 { 00:25:16.624 "name": "BaseBdev2", 00:25:16.624 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:16.624 "is_configured": true, 00:25:16.624 "data_offset": 256, 00:25:16.624 "data_size": 7936 00:25:16.624 } 00:25:16.624 ] 00:25:16.624 }' 00:25:16.624 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:16.624 03:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:17.192 03:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:17.451 [2024-05-15 03:19:48.581177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:17.451 [2024-05-15 03:19:48.581226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.451 [2024-05-15 03:19:48.581249] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbc48b0 00:25:17.451 [2024-05-15 03:19:48.581258] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.451 [2024-05-15 03:19:48.581468] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.451 [2024-05-15 03:19:48.581482] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:17.451 [2024-05-15 03:19:48.581536] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:17.451 [2024-05-15 03:19:48.581546] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:17.451 [2024-05-15 03:19:48.581553] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:17.451 [2024-05-15 03:19:48.581567] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:17.451 [2024-05-15 03:19:48.583682] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbc37c0 00:25:17.451 [2024-05-15 03:19:48.585117] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:17.451 spare 00:25:17.451 03:19:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:18.829 "name": "raid_bdev1", 00:25:18.829 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:18.829 "strip_size_kb": 0, 00:25:18.829 "state": "online", 00:25:18.829 "raid_level": "raid1", 00:25:18.829 "superblock": true, 00:25:18.829 "num_base_bdevs": 2, 00:25:18.829 "num_base_bdevs_discovered": 2, 00:25:18.829 "num_base_bdevs_operational": 2, 00:25:18.829 "process": { 00:25:18.829 "type": 
"rebuild", 00:25:18.829 "target": "spare", 00:25:18.829 "progress": { 00:25:18.829 "blocks": 3072, 00:25:18.829 "percent": 38 00:25:18.829 } 00:25:18.829 }, 00:25:18.829 "base_bdevs_list": [ 00:25:18.829 { 00:25:18.829 "name": "spare", 00:25:18.829 "uuid": "fbde4d8d-6fc5-5fb3-b9b9-20c55fcc2752", 00:25:18.829 "is_configured": true, 00:25:18.829 "data_offset": 256, 00:25:18.829 "data_size": 7936 00:25:18.829 }, 00:25:18.829 { 00:25:18.829 "name": "BaseBdev2", 00:25:18.829 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:18.829 "is_configured": true, 00:25:18.829 "data_offset": 256, 00:25:18.829 "data_size": 7936 00:25:18.829 } 00:25:18.829 ] 00:25:18.829 }' 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:18.829 03:19:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:19.089 [2024-05-15 03:19:50.198345] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:19.347 [2024-05-15 03:19:50.298250] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:19.347 [2024-05-15 03:19:50.298292] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.347 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:19.347 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:19.347 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:19.347 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:19.347 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:19.347 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:19.347 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:19.348 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:19.348 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:19.348 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:19.348 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.348 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.607 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:19.607 "name": "raid_bdev1", 00:25:19.607 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:19.607 
"strip_size_kb": 0, 00:25:19.607 "state": "online", 00:25:19.607 "raid_level": "raid1", 00:25:19.607 "superblock": true, 00:25:19.607 "num_base_bdevs": 2, 00:25:19.607 "num_base_bdevs_discovered": 1, 00:25:19.607 "num_base_bdevs_operational": 1, 00:25:19.607 "base_bdevs_list": [ 00:25:19.607 { 00:25:19.607 "name": null, 00:25:19.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.607 "is_configured": false, 00:25:19.607 "data_offset": 256, 00:25:19.607 "data_size": 7936 00:25:19.607 }, 00:25:19.607 { 00:25:19.607 "name": "BaseBdev2", 00:25:19.607 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:19.607 "is_configured": true, 00:25:19.607 "data_offset": 256, 00:25:19.607 "data_size": 7936 00:25:19.607 } 00:25:19.607 ] 00:25:19.607 }' 00:25:19.607 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:19.607 03:19:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:20.175 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:20.175 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:20.175 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:20.175 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:20.175 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:20.175 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.175 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.434 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:20.434 "name": "raid_bdev1", 00:25:20.434 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:20.434 "strip_size_kb": 0, 00:25:20.434 "state": "online", 00:25:20.434 "raid_level": "raid1", 00:25:20.434 "superblock": true, 00:25:20.434 "num_base_bdevs": 2, 00:25:20.434 "num_base_bdevs_discovered": 1, 00:25:20.434 "num_base_bdevs_operational": 1, 00:25:20.434 "base_bdevs_list": [ 00:25:20.434 { 00:25:20.434 "name": null, 00:25:20.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.434 "is_configured": false, 00:25:20.434 "data_offset": 256, 00:25:20.434 "data_size": 7936 00:25:20.434 }, 00:25:20.434 { 00:25:20.434 "name": "BaseBdev2", 00:25:20.434 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:20.434 "is_configured": true, 00:25:20.434 "data_offset": 256, 00:25:20.434 "data_size": 7936 00:25:20.434 } 00:25:20.434 ] 00:25:20.434 }' 00:25:20.434 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:20.434 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:20.434 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:20.434 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:20.434 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:20.693 03:19:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@785 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:20.952 [2024-05-15 03:19:52.025981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:20.952 [2024-05-15 03:19:52.026022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.952 [2024-05-15 03:19:52.026041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbc09d0 00:25:20.952 [2024-05-15 03:19:52.026050] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.952 [2024-05-15 03:19:52.026233] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.952 [2024-05-15 03:19:52.026246] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:20.952 [2024-05-15 03:19:52.026288] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:20.952 [2024-05-15 03:19:52.026298] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:20.952 [2024-05-15 03:19:52.026305] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:20.952 BaseBdev1 00:25:20.952 03:19:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # sleep 1 00:25:22.328 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@787 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:22.329 "name": "raid_bdev1", 00:25:22.329 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:22.329 "strip_size_kb": 0, 00:25:22.329 "state": "online", 00:25:22.329 "raid_level": 
"raid1", 00:25:22.329 "superblock": true, 00:25:22.329 "num_base_bdevs": 2, 00:25:22.329 "num_base_bdevs_discovered": 1, 00:25:22.329 "num_base_bdevs_operational": 1, 00:25:22.329 "base_bdevs_list": [ 00:25:22.329 { 00:25:22.329 "name": null, 00:25:22.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.329 "is_configured": false, 00:25:22.329 "data_offset": 256, 00:25:22.329 "data_size": 7936 00:25:22.329 }, 00:25:22.329 { 00:25:22.329 "name": "BaseBdev2", 00:25:22.329 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:22.329 "is_configured": true, 00:25:22.329 "data_offset": 256, 00:25:22.329 "data_size": 7936 00:25:22.329 } 00:25:22.329 ] 00:25:22.329 }' 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:22.329 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.894 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@788 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:22.894 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:22.894 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:22.894 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:22.894 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:22.894 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.894 03:19:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.152 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:23.152 "name": "raid_bdev1", 00:25:23.152 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:23.152 "strip_size_kb": 0, 00:25:23.152 "state": "online", 00:25:23.152 "raid_level": "raid1", 00:25:23.152 "superblock": true, 00:25:23.152 "num_base_bdevs": 2, 00:25:23.152 "num_base_bdevs_discovered": 1, 00:25:23.152 "num_base_bdevs_operational": 1, 00:25:23.152 "base_bdevs_list": [ 00:25:23.152 { 00:25:23.152 "name": null, 00:25:23.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.152 "is_configured": false, 00:25:23.152 "data_offset": 256, 00:25:23.152 "data_size": 7936 00:25:23.152 }, 00:25:23.152 { 00:25:23.152 "name": "BaseBdev2", 00:25:23.152 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:23.152 "is_configured": true, 00:25:23.152 "data_offset": 256, 00:25:23.152 "data_size": 7936 00:25:23.152 } 00:25:23.152 ] 00:25:23.152 }' 00:25:23.152 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:23.152 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:23.152 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:23.152 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:23.152 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@789 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 
BaseBdev1 00:25:23.153 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:25:23.153 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:23.153 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:23.153 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:23.153 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:23.153 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:23.153 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:23.153 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:23.153 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:23.153 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:23.153 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:23.411 [2024-05-15 03:19:54.524680] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:23.411 [2024-05-15 03:19:54.524796] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:23.411 [2024-05-15 03:19:54.524809] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:23.411 request: 00:25:23.411 { 00:25:23.411 "raid_bdev": "raid_bdev1", 00:25:23.411 "base_bdev": "BaseBdev1", 00:25:23.411 "method": "bdev_raid_add_base_bdev", 00:25:23.411 "req_id": 1 00:25:23.411 } 00:25:23.411 Got JSON-RPC error response 00:25:23.411 response: 00:25:23.411 { 00:25:23.411 "code": -22, 00:25:23.411 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:23.411 } 00:25:23.411 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:25:23.411 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:23.411 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:23.411 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:23.411 03:19:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@790 -- # sleep 1 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=raid_bdev1 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:24.843 "name": "raid_bdev1", 00:25:24.843 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:24.843 "strip_size_kb": 0, 00:25:24.843 "state": "online", 00:25:24.843 "raid_level": "raid1", 00:25:24.843 "superblock": true, 00:25:24.843 "num_base_bdevs": 2, 00:25:24.843 "num_base_bdevs_discovered": 1, 00:25:24.843 "num_base_bdevs_operational": 1, 00:25:24.843 "base_bdevs_list": [ 00:25:24.843 { 00:25:24.843 "name": null, 00:25:24.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.843 "is_configured": false, 00:25:24.843 "data_offset": 256, 00:25:24.843 "data_size": 7936 00:25:24.843 }, 00:25:24.843 { 00:25:24.843 "name": "BaseBdev2", 00:25:24.843 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:24.843 "is_configured": true, 00:25:24.843 "data_offset": 256, 00:25:24.843 "data_size": 7936 00:25:24.843 } 00:25:24.843 ] 00:25:24.843 }' 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:24.843 03:19:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:25.410 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@792 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:25.410 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:25.410 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:25.410 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:25.410 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:25.410 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.410 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.669 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:25.669 "name": "raid_bdev1", 00:25:25.669 "uuid": "10649e01-5ada-4848-902a-c7f44ba6d25e", 00:25:25.669 "strip_size_kb": 0, 00:25:25.669 "state": "online", 00:25:25.669 "raid_level": "raid1", 00:25:25.669 "superblock": true, 00:25:25.669 "num_base_bdevs": 2, 00:25:25.669 "num_base_bdevs_discovered": 1, 00:25:25.669 "num_base_bdevs_operational": 1, 00:25:25.669 "base_bdevs_list": [ 00:25:25.669 { 00:25:25.669 "name": null, 00:25:25.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.669 "is_configured": false, 00:25:25.669 "data_offset": 256, 00:25:25.669 "data_size": 7936 00:25:25.669 }, 00:25:25.669 { 00:25:25.669 "name": "BaseBdev2", 00:25:25.669 "uuid": "590e2d87-593e-5561-8fc5-ee080c1318fd", 00:25:25.669 "is_configured": true, 00:25:25.669 "data_offset": 256, 00:25:25.669 "data_size": 7936 00:25:25.669 } 00:25:25.669 ] 00:25:25.669 }' 00:25:25.669 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:25.669 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:25.669 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:25.669 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:25.669 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@795 -- # killprocess 12882 00:25:25.669 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 12882 ']' 00:25:25.669 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 12882 00:25:25.669 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:25:25.669 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:25.669 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 12882 00:25:25.927 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:25.927 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:25.927 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 12882' 00:25:25.927 killing process with pid 12882 00:25:25.927 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 12882 00:25:25.927 Received shutdown signal, test time was about 60.000000 seconds 00:25:25.927 00:25:25.927 Latency(us) 00:25:25.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.927 =================================================================================================================== 00:25:25.927 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:25.927 [2024-05-15 03:19:56.839731] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:25.927 [2024-05-15 03:19:56.839823] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:25.927 [2024-05-15 03:19:56.839880] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:25.927 [2024-05-15 03:19:56.839890] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0xbc3fb0 name raid_bdev1, state offline 00:25:25.927 03:19:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 12882 00:25:25.927 [2024-05-15 03:19:56.871266] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:26.186 03:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@797 -- # return 0 00:25:26.186 00:25:26.186 real 0m33.248s 00:25:26.186 user 0m53.308s 00:25:26.186 sys 0m4.375s 00:25:26.186 03:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:26.186 03:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:26.186 ************************************ 00:25:26.186 END TEST raid_rebuild_test_sb_md_separate 00:25:26.186 ************************************ 00:25:26.186 03:19:57 bdev_raid -- bdev/bdev_raid.sh@857 -- # base_malloc_params='-m 32 -i' 00:25:26.186 03:19:57 bdev_raid -- bdev/bdev_raid.sh@858 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:25:26.186 03:19:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:25:26.186 03:19:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:26.186 03:19:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:26.186 ************************************ 00:25:26.186 START TEST raid_state_function_test_sb_md_interleaved 00:25:26.186 ************************************ 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 
00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # raid_pid=18786 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 18786' 00:25:26.186 Process raid pid: 18786 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@247 -- # waitforlisten 18786 /var/tmp/spdk-raid.sock 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 18786 ']' 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:26.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:26.186 03:19:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.186 [2024-05-15 03:19:57.232005] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:25:26.186 [2024-05-15 03:19:57.232059] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.186 [2024-05-15 03:19:57.333087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.445 [2024-05-15 03:19:57.427502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.445 [2024-05-15 03:19:57.485887] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:26.445 [2024-05-15 03:19:57.485914] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:27.012 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:27.012 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:25:27.012 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:27.270 [2024-05-15 03:19:58.328422] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:27.270 [2024-05-15 03:19:58.328462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:27.270 [2024-05-15 03:19:58.328471] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:27.270 [2024-05-15 03:19:58.328480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:27.270 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:27.270 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:27.270 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:27.270 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:27.270 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:27.270 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:27.270 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:27.270 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:27.270 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:27.270 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:27.270 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.270 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:27.529 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:25:27.529 "name": "Existed_Raid", 00:25:27.529 "uuid": "784c0392-8d88-47ff-9aaf-319113a638ad", 00:25:27.529 "strip_size_kb": 0, 00:25:27.529 "state": "configuring", 00:25:27.529 "raid_level": "raid1", 00:25:27.529 "superblock": true, 00:25:27.529 "num_base_bdevs": 2, 00:25:27.529 "num_base_bdevs_discovered": 0, 00:25:27.529 "num_base_bdevs_operational": 2, 00:25:27.529 "base_bdevs_list": [ 00:25:27.529 { 00:25:27.529 "name": "BaseBdev1", 00:25:27.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.529 "is_configured": false, 00:25:27.529 "data_offset": 0, 00:25:27.529 "data_size": 0 00:25:27.529 }, 00:25:27.529 { 00:25:27.529 "name": "BaseBdev2", 00:25:27.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.529 "is_configured": false, 00:25:27.529 "data_offset": 0, 00:25:27.529 "data_size": 0 00:25:27.529 } 00:25:27.529 ] 00:25:27.529 }' 00:25:27.529 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:27.529 03:19:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:28.096 03:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:28.354 [2024-05-15 03:19:59.467308] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:28.354 [2024-05-15 03:19:59.467339] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20f7dc0 name Existed_Raid, state configuring 00:25:28.354 03:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:28.613 [2024-05-15 03:19:59.719993] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:28.613 [2024-05-15 03:19:59.720021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:28.613 [2024-05-15 03:19:59.720029] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:28.613 [2024-05-15 03:19:59.720038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:28.613 03:19:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:25:28.871 [2024-05-15 03:19:59.982320] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:28.871 BaseBdev1 00:25:28.871 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:25:28.871 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:28.871 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:28.871 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:25:28.871 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:28.871 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:28.871 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:29.129 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:29.387 [ 00:25:29.387 { 00:25:29.387 "name": "BaseBdev1", 00:25:29.387 "aliases": [ 00:25:29.387 "34d41a30-2d9d-45b7-aabd-1e543ed0bbfa" 00:25:29.387 ], 00:25:29.387 "product_name": "Malloc disk", 00:25:29.387 "block_size": 4128, 00:25:29.387 "num_blocks": 8192, 00:25:29.387 "uuid": "34d41a30-2d9d-45b7-aabd-1e543ed0bbfa", 00:25:29.387 "md_size": 32, 00:25:29.387 "md_interleave": true, 00:25:29.387 "dif_type": 0, 00:25:29.387 "assigned_rate_limits": { 00:25:29.387 "rw_ios_per_sec": 0, 00:25:29.387 "rw_mbytes_per_sec": 0, 00:25:29.387 "r_mbytes_per_sec": 0, 00:25:29.387 "w_mbytes_per_sec": 0 00:25:29.387 }, 00:25:29.387 "claimed": true, 00:25:29.387 "claim_type": "exclusive_write", 00:25:29.387 "zoned": false, 00:25:29.387 "supported_io_types": { 00:25:29.387 "read": true, 00:25:29.388 "write": true, 00:25:29.388 "unmap": true, 00:25:29.388 "write_zeroes": true, 00:25:29.388 "flush": true, 00:25:29.388 "reset": true, 00:25:29.388 "compare": false, 00:25:29.388 "compare_and_write": false, 00:25:29.388 "abort": true, 00:25:29.388 "nvme_admin": false, 00:25:29.388 "nvme_io": false 00:25:29.388 }, 00:25:29.388 "memory_domains": [ 00:25:29.388 { 00:25:29.388 "dma_device_id": "system", 00:25:29.388 "dma_device_type": 1 00:25:29.388 }, 00:25:29.388 { 00:25:29.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.388 "dma_device_type": 2 00:25:29.388 } 00:25:29.388 ], 00:25:29.388 "driver_specific": {} 00:25:29.388 } 00:25:29.388 ] 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.388 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.646 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:29.646 "name": "Existed_Raid", 00:25:29.646 "uuid": "4dff70c0-ad3f-4eed-9e9c-3bfaa5c57a3b", 00:25:29.646 "strip_size_kb": 0, 00:25:29.646 "state": "configuring", 00:25:29.646 "raid_level": "raid1", 00:25:29.646 "superblock": true, 00:25:29.646 "num_base_bdevs": 2, 00:25:29.646 "num_base_bdevs_discovered": 1, 00:25:29.646 "num_base_bdevs_operational": 2, 00:25:29.646 "base_bdevs_list": [ 00:25:29.646 { 00:25:29.646 "name": "BaseBdev1", 00:25:29.646 "uuid": "34d41a30-2d9d-45b7-aabd-1e543ed0bbfa", 00:25:29.646 "is_configured": true, 00:25:29.646 "data_offset": 256, 00:25:29.646 "data_size": 7936 00:25:29.646 }, 00:25:29.646 { 00:25:29.646 "name": "BaseBdev2", 00:25:29.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.646 "is_configured": false, 00:25:29.646 "data_offset": 0, 00:25:29.646 "data_size": 0 00:25:29.646 } 00:25:29.646 ] 00:25:29.646 }' 00:25:29.646 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:29.646 03:20:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.580 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:30.580 [2024-05-15 03:20:01.618724] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:30.580 [2024-05-15 03:20:01.618761] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20f8060 name Existed_Raid, state configuring 00:25:30.580 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:30.839 [2024-05-15 03:20:01.871442] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:30.839 [2024-05-15 03:20:01.872969] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:30.839 [2024-05-15 03:20:01.873000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.839 03:20:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.097 03:20:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:31.097 "name": "Existed_Raid", 00:25:31.097 "uuid": "67df6777-c927-4b4b-ad93-1ddad46aedc6", 00:25:31.097 "strip_size_kb": 0, 00:25:31.097 "state": "configuring", 00:25:31.097 "raid_level": "raid1", 00:25:31.097 "superblock": true, 00:25:31.097 "num_base_bdevs": 2, 00:25:31.097 "num_base_bdevs_discovered": 1, 00:25:31.097 "num_base_bdevs_operational": 2, 00:25:31.097 "base_bdevs_list": [ 00:25:31.097 { 00:25:31.097 "name": "BaseBdev1", 00:25:31.097 "uuid": "34d41a30-2d9d-45b7-aabd-1e543ed0bbfa", 00:25:31.097 "is_configured": true, 00:25:31.097 "data_offset": 256, 00:25:31.097 "data_size": 7936 00:25:31.097 }, 00:25:31.097 { 00:25:31.097 "name": "BaseBdev2", 00:25:31.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.097 "is_configured": false, 00:25:31.097 "data_offset": 0, 00:25:31.097 "data_size": 0 00:25:31.097 } 00:25:31.097 ] 00:25:31.097 }' 00:25:31.097 03:20:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:31.097 03:20:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.661 03:20:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:25:31.919 [2024-05-15 03:20:03.014048] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:31.919 [2024-05-15 03:20:03.014186] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x20f76b0 00:25:31.919 [2024-05-15 03:20:03.014203] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:31.919 [2024-05-15 03:20:03.014266] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2294eb0 00:25:31.919 [2024-05-15 03:20:03.014342] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20f76b0 00:25:31.919 [2024-05-15 03:20:03.014350] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x20f76b0 00:25:31.919 [2024-05-15 03:20:03.014404] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:31.919 BaseBdev2 00:25:31.919 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@269 -- # 
waitforbdev BaseBdev2 00:25:31.919 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:31.919 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:31.919 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:25:31.919 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:31.919 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:31.919 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:32.177 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:32.444 [ 00:25:32.444 { 00:25:32.444 "name": "BaseBdev2", 00:25:32.444 "aliases": [ 00:25:32.444 "4dec9065-fcfa-424b-81ba-9c098cf73ed1" 00:25:32.444 ], 00:25:32.444 "product_name": "Malloc disk", 00:25:32.444 "block_size": 4128, 00:25:32.444 "num_blocks": 8192, 00:25:32.444 "uuid": "4dec9065-fcfa-424b-81ba-9c098cf73ed1", 00:25:32.444 "md_size": 32, 00:25:32.444 "md_interleave": true, 00:25:32.444 "dif_type": 0, 00:25:32.444 "assigned_rate_limits": { 00:25:32.444 "rw_ios_per_sec": 0, 00:25:32.444 "rw_mbytes_per_sec": 0, 00:25:32.444 "r_mbytes_per_sec": 0, 00:25:32.444 "w_mbytes_per_sec": 0 00:25:32.444 }, 00:25:32.444 "claimed": true, 00:25:32.444 "claim_type": "exclusive_write", 00:25:32.444 "zoned": false, 00:25:32.444 "supported_io_types": { 00:25:32.444 "read": true, 00:25:32.444 "write": true, 00:25:32.444 "unmap": true, 00:25:32.444 "write_zeroes": true, 00:25:32.444 "flush": true, 00:25:32.444 "reset": true, 00:25:32.444 "compare": false, 00:25:32.444 "compare_and_write": false, 00:25:32.444 "abort": true, 00:25:32.444 "nvme_admin": false, 00:25:32.444 "nvme_io": false 00:25:32.444 }, 00:25:32.444 "memory_domains": [ 00:25:32.444 { 00:25:32.444 "dma_device_id": "system", 00:25:32.444 "dma_device_type": 1 00:25:32.444 }, 00:25:32.444 { 00:25:32.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.444 "dma_device_type": 2 00:25:32.444 } 00:25:32.444 ], 00:25:32.444 "driver_specific": {} 00:25:32.444 } 00:25:32.444 ] 00:25:32.444 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:25:32.444 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:25:32.444 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:32.444 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:32.444 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:32.444 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:32.444 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:32.444 03:20:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:32.444 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:32.444 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:32.445 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:32.445 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:32.445 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:32.445 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.445 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.710 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:32.710 "name": "Existed_Raid", 00:25:32.710 "uuid": "67df6777-c927-4b4b-ad93-1ddad46aedc6", 00:25:32.710 "strip_size_kb": 0, 00:25:32.710 "state": "online", 00:25:32.710 "raid_level": "raid1", 00:25:32.710 "superblock": true, 00:25:32.710 "num_base_bdevs": 2, 00:25:32.710 "num_base_bdevs_discovered": 2, 00:25:32.710 "num_base_bdevs_operational": 2, 00:25:32.710 "base_bdevs_list": [ 00:25:32.710 { 00:25:32.710 "name": "BaseBdev1", 00:25:32.710 "uuid": "34d41a30-2d9d-45b7-aabd-1e543ed0bbfa", 00:25:32.710 "is_configured": true, 00:25:32.710 "data_offset": 256, 00:25:32.710 "data_size": 7936 00:25:32.710 }, 00:25:32.710 { 00:25:32.710 "name": "BaseBdev2", 00:25:32.710 "uuid": "4dec9065-fcfa-424b-81ba-9c098cf73ed1", 00:25:32.710 "is_configured": true, 00:25:32.710 "data_offset": 256, 00:25:32.710 "data_size": 7936 00:25:32.710 } 00:25:32.710 ] 00:25:32.710 }' 00:25:32.710 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:32.710 03:20:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:33.275 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:25:33.275 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:25:33.275 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:33.275 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:33.275 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:33.275 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:25:33.275 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:33.275 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:33.533 [2024-05-15 03:20:04.642700] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:33.533 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:33.533 "name": "Existed_Raid", 00:25:33.533 "aliases": [ 00:25:33.533 "67df6777-c927-4b4b-ad93-1ddad46aedc6" 00:25:33.533 ], 00:25:33.533 "product_name": "Raid Volume", 00:25:33.533 "block_size": 4128, 00:25:33.533 "num_blocks": 7936, 00:25:33.533 "uuid": "67df6777-c927-4b4b-ad93-1ddad46aedc6", 00:25:33.533 "md_size": 32, 00:25:33.533 "md_interleave": true, 00:25:33.533 "dif_type": 0, 00:25:33.533 "assigned_rate_limits": { 00:25:33.533 "rw_ios_per_sec": 0, 00:25:33.533 "rw_mbytes_per_sec": 0, 00:25:33.533 "r_mbytes_per_sec": 0, 00:25:33.533 "w_mbytes_per_sec": 0 00:25:33.533 }, 00:25:33.533 "claimed": false, 00:25:33.533 "zoned": false, 00:25:33.533 "supported_io_types": { 00:25:33.533 "read": true, 00:25:33.533 "write": true, 00:25:33.533 "unmap": false, 00:25:33.533 "write_zeroes": true, 00:25:33.533 "flush": false, 00:25:33.533 "reset": true, 00:25:33.533 "compare": false, 00:25:33.533 "compare_and_write": false, 00:25:33.533 "abort": false, 00:25:33.533 "nvme_admin": false, 00:25:33.533 "nvme_io": false 00:25:33.533 }, 00:25:33.533 "memory_domains": [ 00:25:33.533 { 00:25:33.533 "dma_device_id": "system", 00:25:33.533 "dma_device_type": 1 00:25:33.533 }, 00:25:33.533 { 00:25:33.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.533 "dma_device_type": 2 00:25:33.533 }, 00:25:33.533 { 00:25:33.533 "dma_device_id": "system", 00:25:33.533 "dma_device_type": 1 00:25:33.533 }, 00:25:33.533 { 00:25:33.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.533 "dma_device_type": 2 00:25:33.533 } 00:25:33.533 ], 00:25:33.533 "driver_specific": { 00:25:33.533 "raid": { 00:25:33.533 "uuid": "67df6777-c927-4b4b-ad93-1ddad46aedc6", 00:25:33.533 "strip_size_kb": 0, 00:25:33.533 "state": "online", 00:25:33.533 "raid_level": "raid1", 00:25:33.533 "superblock": true, 00:25:33.533 "num_base_bdevs": 2, 00:25:33.533 "num_base_bdevs_discovered": 2, 00:25:33.533 "num_base_bdevs_operational": 2, 00:25:33.533 "base_bdevs_list": [ 00:25:33.533 { 00:25:33.533 "name": "BaseBdev1", 00:25:33.533 "uuid": "34d41a30-2d9d-45b7-aabd-1e543ed0bbfa", 00:25:33.533 "is_configured": true, 00:25:33.533 "data_offset": 256, 00:25:33.533 "data_size": 7936 00:25:33.533 }, 00:25:33.533 { 00:25:33.533 "name": "BaseBdev2", 00:25:33.533 "uuid": "4dec9065-fcfa-424b-81ba-9c098cf73ed1", 00:25:33.533 "is_configured": true, 00:25:33.533 "data_offset": 256, 00:25:33.533 "data_size": 7936 00:25:33.533 } 00:25:33.533 ] 00:25:33.533 } 00:25:33.533 } 00:25:33.533 }' 00:25:33.533 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:33.791 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:25:33.791 BaseBdev2' 00:25:33.791 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:33.791 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:33.791 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:34.049 03:20:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:34.049 "name": "BaseBdev1", 00:25:34.049 "aliases": [ 00:25:34.049 "34d41a30-2d9d-45b7-aabd-1e543ed0bbfa" 00:25:34.049 ], 00:25:34.049 "product_name": "Malloc disk", 00:25:34.049 "block_size": 4128, 00:25:34.049 "num_blocks": 8192, 00:25:34.049 "uuid": "34d41a30-2d9d-45b7-aabd-1e543ed0bbfa", 00:25:34.049 "md_size": 32, 00:25:34.049 "md_interleave": true, 00:25:34.049 "dif_type": 0, 00:25:34.049 "assigned_rate_limits": { 00:25:34.049 "rw_ios_per_sec": 0, 00:25:34.049 "rw_mbytes_per_sec": 0, 00:25:34.049 "r_mbytes_per_sec": 0, 00:25:34.049 "w_mbytes_per_sec": 0 00:25:34.049 }, 00:25:34.049 "claimed": true, 00:25:34.049 "claim_type": "exclusive_write", 00:25:34.049 "zoned": false, 00:25:34.049 "supported_io_types": { 00:25:34.049 "read": true, 00:25:34.049 "write": true, 00:25:34.049 "unmap": true, 00:25:34.049 "write_zeroes": true, 00:25:34.049 "flush": true, 00:25:34.049 "reset": true, 00:25:34.049 "compare": false, 00:25:34.049 "compare_and_write": false, 00:25:34.049 "abort": true, 00:25:34.049 "nvme_admin": false, 00:25:34.049 "nvme_io": false 00:25:34.049 }, 00:25:34.049 "memory_domains": [ 00:25:34.049 { 00:25:34.049 "dma_device_id": "system", 00:25:34.049 "dma_device_type": 1 00:25:34.049 }, 00:25:34.049 { 00:25:34.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.049 "dma_device_type": 2 00:25:34.049 } 00:25:34.049 ], 00:25:34.049 "driver_specific": {} 00:25:34.049 }' 00:25:34.049 03:20:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:34.050 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:34.050 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:25:34.050 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:34.050 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:34.050 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:25:34.050 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:34.050 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:34.308 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:25:34.308 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:34.308 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:34.308 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:25:34.308 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:34.308 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:34.308 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:34.567 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 
00:25:34.567 "name": "BaseBdev2", 00:25:34.567 "aliases": [ 00:25:34.567 "4dec9065-fcfa-424b-81ba-9c098cf73ed1" 00:25:34.567 ], 00:25:34.567 "product_name": "Malloc disk", 00:25:34.567 "block_size": 4128, 00:25:34.567 "num_blocks": 8192, 00:25:34.567 "uuid": "4dec9065-fcfa-424b-81ba-9c098cf73ed1", 00:25:34.567 "md_size": 32, 00:25:34.567 "md_interleave": true, 00:25:34.567 "dif_type": 0, 00:25:34.567 "assigned_rate_limits": { 00:25:34.567 "rw_ios_per_sec": 0, 00:25:34.567 "rw_mbytes_per_sec": 0, 00:25:34.567 "r_mbytes_per_sec": 0, 00:25:34.567 "w_mbytes_per_sec": 0 00:25:34.567 }, 00:25:34.567 "claimed": true, 00:25:34.567 "claim_type": "exclusive_write", 00:25:34.567 "zoned": false, 00:25:34.567 "supported_io_types": { 00:25:34.567 "read": true, 00:25:34.567 "write": true, 00:25:34.567 "unmap": true, 00:25:34.567 "write_zeroes": true, 00:25:34.567 "flush": true, 00:25:34.567 "reset": true, 00:25:34.567 "compare": false, 00:25:34.567 "compare_and_write": false, 00:25:34.567 "abort": true, 00:25:34.567 "nvme_admin": false, 00:25:34.567 "nvme_io": false 00:25:34.567 }, 00:25:34.567 "memory_domains": [ 00:25:34.567 { 00:25:34.567 "dma_device_id": "system", 00:25:34.567 "dma_device_type": 1 00:25:34.567 }, 00:25:34.567 { 00:25:34.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.567 "dma_device_type": 2 00:25:34.567 } 00:25:34.567 ], 00:25:34.567 "driver_specific": {} 00:25:34.567 }' 00:25:34.567 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:34.567 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:34.567 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:25:34.567 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:34.567 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:34.825 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:25:34.825 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:34.825 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:34.825 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:25:34.825 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:34.825 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:34.825 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:25:34.825 03:20:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:35.084 [2024-05-15 03:20:06.182659] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # local expected_state 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # case $1 in 
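At this point the test deletes BaseBdev1 out from under the array and, because raid1 is redundant, expects Existed_Raid to stay online with a single operational base bdev (the has_redundancy case statement traced above returns 0 for raid1). A condensed sketch of that check — the helper name and the offline fallback are assumptions, not the script's exact logic:

    # Sketch: expected raid state after removing one base bdev.
    # Assumption: raid1 tolerates a single loss; treat other levels as offline.
    expected_state_after_removal() {
        case "$1" in
            raid1) echo online ;;    # redundant level: survives one missing leg
            *)     echo offline ;;
        esac
    }

    sock=/var/tmp/spdk-raid.sock
    rpc.py -s "$sock" bdev_malloc_delete BaseBdev1
    state=$(rpc.py -s "$sock" bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "Existed_Raid").state')
    [[ "$state" == "$(expected_state_after_removal raid1)" ]]
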
00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # return 0 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.084 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:35.342 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:35.342 "name": "Existed_Raid", 00:25:35.342 "uuid": "67df6777-c927-4b4b-ad93-1ddad46aedc6", 00:25:35.342 "strip_size_kb": 0, 00:25:35.342 "state": "online", 00:25:35.342 "raid_level": "raid1", 00:25:35.342 "superblock": true, 00:25:35.342 "num_base_bdevs": 2, 00:25:35.342 "num_base_bdevs_discovered": 1, 00:25:35.342 "num_base_bdevs_operational": 1, 00:25:35.342 "base_bdevs_list": [ 00:25:35.342 { 00:25:35.342 "name": null, 00:25:35.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.342 "is_configured": false, 00:25:35.342 "data_offset": 256, 00:25:35.342 "data_size": 7936 00:25:35.342 }, 00:25:35.342 { 00:25:35.342 "name": "BaseBdev2", 00:25:35.342 "uuid": "4dec9065-fcfa-424b-81ba-9c098cf73ed1", 00:25:35.342 "is_configured": true, 00:25:35.342 "data_offset": 256, 00:25:35.342 "data_size": 7936 00:25:35.342 } 00:25:35.342 ] 00:25:35.342 }' 00:25:35.342 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:35.342 03:20:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.278 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:36.278 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:36.278 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.278 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:25:36.278 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:25:36.278 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:36.278 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:36.536 [2024-05-15 03:20:07.563626] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:36.536 [2024-05-15 03:20:07.563708] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:36.536 [2024-05-15 03:20:07.574784] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:36.536 [2024-05-15 03:20:07.574858] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:36.536 [2024-05-15 03:20:07.574871] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20f76b0 name Existed_Raid, state offline 00:25:36.536 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:36.536 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:36.536 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:25:36.536 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@342 -- # killprocess 18786 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 18786 ']' 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 18786 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 18786 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
18786' 00:25:36.795 killing process with pid 18786 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 18786 00:25:36.795 [2024-05-15 03:20:07.899778] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:36.795 03:20:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 18786 00:25:36.795 [2024-05-15 03:20:07.900673] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:37.054 03:20:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@344 -- # return 0 00:25:37.054 00:25:37.054 real 0m10.958s 00:25:37.054 user 0m19.968s 00:25:37.054 sys 0m1.571s 00:25:37.054 03:20:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:37.054 03:20:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.054 ************************************ 00:25:37.054 END TEST raid_state_function_test_sb_md_interleaved 00:25:37.054 ************************************ 00:25:37.054 03:20:08 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:25:37.054 03:20:08 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:25:37.054 03:20:08 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:37.054 03:20:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:37.054 ************************************ 00:25:37.054 START TEST raid_superblock_test_md_interleaved 00:25:37.054 ************************************ 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:25:37.054 03:20:08 
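Before creating any malloc or passthru bdevs, the superblock test launches a dedicated bdev_svc app on its own RPC socket (raid_pid=20810 above) and blocks until that socket answers. A reduced sketch of the startup handshake — the rpc_get_methods poll is a stand-in for the harness's waitforlisten helper, and the workspace path matches this CI run (adjust for a local checkout):

    # Sketch: start the SPDK bdev_svc app on a dedicated RPC socket and wait
    # for it to accept JSON-RPC connections before issuing bdev commands.
    spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk   # assumption: SPDK tree
    sock=/var/tmp/spdk-raid.sock

    "$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -L bdev_raid &
    raid_pid=$!    # killed later via killprocess, as in the trace

    # Poll until the socket answers a trivial RPC.
    until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
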
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=20810 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 20810 /var/tmp/spdk-raid.sock 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 20810 ']' 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:37.054 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:37.055 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:37.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:37.314 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:37.314 03:20:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.314 [2024-05-15 03:20:08.263877] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:25:37.314 [2024-05-15 03:20:08.263930] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20810 ] 00:25:37.314 [2024-05-15 03:20:08.360756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.314 [2024-05-15 03:20:08.454613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.573 [2024-05-15 03:20:08.518070] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:37.573 [2024-05-15 03:20:08.518102] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:38.140 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:38.140 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:25:38.140 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:38.140 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:38.140 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:38.140 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:38.140 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:38.140 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:38.140 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:38.140 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:38.140 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:25:38.398 malloc1 00:25:38.398 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:38.657 [2024-05-15 03:20:09.704540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:38.657 [2024-05-15 03:20:09.704584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:38.657 [2024-05-15 03:20:09.704606] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc0b8e0 00:25:38.657 [2024-05-15 03:20:09.704620] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:38.657 [2024-05-15 03:20:09.706133] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:38.657 [2024-05-15 03:20:09.706158] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:38.657 pt1 00:25:38.657 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:38.657 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:38.657 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:38.657 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:38.657 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:38.657 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:38.657 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:38.657 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:38.657 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:25:38.916 malloc2 00:25:38.916 03:20:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:39.174 [2024-05-15 03:20:10.210732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:39.175 [2024-05-15 03:20:10.210777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:39.175 [2024-05-15 03:20:10.210796] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbf0970 00:25:39.175 [2024-05-15 03:20:10.210806] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:39.175 [2024-05-15 03:20:10.212337] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:39.175 [2024-05-15 03:20:10.212362] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 
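Each leg of the array is a 32 MiB malloc bdev with 4096-byte blocks and 32 bytes of interleaved metadata per block (-m 32 -i), wrapped in a passthru bdev with a fixed UUID so the superblock contents are deterministic; the trace lines that follow then join pt1 and pt2 into raid_bdev1 with an on-disk superblock (-s). The same construction as a standalone sketch (rpc.py on PATH is an assumption):

    # Sketch: build two passthru-wrapped malloc bdevs with interleaved metadata
    # and join them into a raid1 bdev carrying an on-disk superblock.
    sock=/var/tmp/spdk-raid.sock
    rpc=( rpc.py -s "$sock" )   # assumption: SPDK's scripts/rpc.py on PATH

    for i in 1 2; do
        # 32 MiB, 4096-byte blocks, 32 bytes of metadata interleaved per block
        "${rpc[@]}" bdev_malloc_create 32 4096 -m 32 -i -b "malloc$i"
        "${rpc[@]}" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    "${rpc[@]}" bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
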
00:25:39.175 pt2 00:25:39.175 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:39.175 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:39.175 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:25:39.433 [2024-05-15 03:20:10.463417] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:39.433 [2024-05-15 03:20:10.464895] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:39.433 [2024-05-15 03:20:10.465050] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0xbfc0b0 00:25:39.433 [2024-05-15 03:20:10.465063] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:39.433 [2024-05-15 03:20:10.465132] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbfbee0 00:25:39.433 [2024-05-15 03:20:10.465224] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbfc0b0 00:25:39.433 [2024-05-15 03:20:10.465232] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbfc0b0 00:25:39.433 [2024-05-15 03:20:10.465292] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:39.433 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:39.433 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:39.433 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:39.433 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:39.433 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:39.433 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:39.433 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:39.433 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:39.433 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:39.433 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:39.433 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.433 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.691 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:39.691 "name": "raid_bdev1", 00:25:39.691 "uuid": "7878ad66-c1af-4813-a49c-e6ac60663f2d", 00:25:39.691 "strip_size_kb": 0, 00:25:39.691 "state": "online", 00:25:39.691 "raid_level": "raid1", 00:25:39.691 "superblock": true, 00:25:39.691 "num_base_bdevs": 2, 00:25:39.691 "num_base_bdevs_discovered": 2, 00:25:39.691 
"num_base_bdevs_operational": 2, 00:25:39.691 "base_bdevs_list": [ 00:25:39.691 { 00:25:39.691 "name": "pt1", 00:25:39.691 "uuid": "97365c47-25de-5e6d-ac4f-a1b26d0cf1ec", 00:25:39.691 "is_configured": true, 00:25:39.691 "data_offset": 256, 00:25:39.691 "data_size": 7936 00:25:39.691 }, 00:25:39.691 { 00:25:39.691 "name": "pt2", 00:25:39.691 "uuid": "3b2bd3b0-b4ca-5350-bba9-5fdb72971e38", 00:25:39.691 "is_configured": true, 00:25:39.691 "data_offset": 256, 00:25:39.691 "data_size": 7936 00:25:39.691 } 00:25:39.691 ] 00:25:39.691 }' 00:25:39.691 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:39.691 03:20:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.255 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:40.255 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:25:40.255 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:40.255 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:40.255 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:40.255 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:25:40.255 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:40.255 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:40.512 [2024-05-15 03:20:11.606685] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:40.512 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:40.512 "name": "raid_bdev1", 00:25:40.512 "aliases": [ 00:25:40.512 "7878ad66-c1af-4813-a49c-e6ac60663f2d" 00:25:40.512 ], 00:25:40.512 "product_name": "Raid Volume", 00:25:40.512 "block_size": 4128, 00:25:40.512 "num_blocks": 7936, 00:25:40.512 "uuid": "7878ad66-c1af-4813-a49c-e6ac60663f2d", 00:25:40.512 "md_size": 32, 00:25:40.512 "md_interleave": true, 00:25:40.512 "dif_type": 0, 00:25:40.512 "assigned_rate_limits": { 00:25:40.512 "rw_ios_per_sec": 0, 00:25:40.512 "rw_mbytes_per_sec": 0, 00:25:40.512 "r_mbytes_per_sec": 0, 00:25:40.512 "w_mbytes_per_sec": 0 00:25:40.512 }, 00:25:40.512 "claimed": false, 00:25:40.512 "zoned": false, 00:25:40.512 "supported_io_types": { 00:25:40.512 "read": true, 00:25:40.512 "write": true, 00:25:40.512 "unmap": false, 00:25:40.512 "write_zeroes": true, 00:25:40.512 "flush": false, 00:25:40.512 "reset": true, 00:25:40.512 "compare": false, 00:25:40.512 "compare_and_write": false, 00:25:40.512 "abort": false, 00:25:40.512 "nvme_admin": false, 00:25:40.512 "nvme_io": false 00:25:40.512 }, 00:25:40.513 "memory_domains": [ 00:25:40.513 { 00:25:40.513 "dma_device_id": "system", 00:25:40.513 "dma_device_type": 1 00:25:40.513 }, 00:25:40.513 { 00:25:40.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.513 "dma_device_type": 2 00:25:40.513 }, 00:25:40.513 { 00:25:40.513 "dma_device_id": "system", 00:25:40.513 "dma_device_type": 1 00:25:40.513 }, 00:25:40.513 { 00:25:40.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.513 
"dma_device_type": 2 00:25:40.513 } 00:25:40.513 ], 00:25:40.513 "driver_specific": { 00:25:40.513 "raid": { 00:25:40.513 "uuid": "7878ad66-c1af-4813-a49c-e6ac60663f2d", 00:25:40.513 "strip_size_kb": 0, 00:25:40.513 "state": "online", 00:25:40.513 "raid_level": "raid1", 00:25:40.513 "superblock": true, 00:25:40.513 "num_base_bdevs": 2, 00:25:40.513 "num_base_bdevs_discovered": 2, 00:25:40.513 "num_base_bdevs_operational": 2, 00:25:40.513 "base_bdevs_list": [ 00:25:40.513 { 00:25:40.513 "name": "pt1", 00:25:40.513 "uuid": "97365c47-25de-5e6d-ac4f-a1b26d0cf1ec", 00:25:40.513 "is_configured": true, 00:25:40.513 "data_offset": 256, 00:25:40.513 "data_size": 7936 00:25:40.513 }, 00:25:40.513 { 00:25:40.513 "name": "pt2", 00:25:40.513 "uuid": "3b2bd3b0-b4ca-5350-bba9-5fdb72971e38", 00:25:40.513 "is_configured": true, 00:25:40.513 "data_offset": 256, 00:25:40.513 "data_size": 7936 00:25:40.513 } 00:25:40.513 ] 00:25:40.513 } 00:25:40.513 } 00:25:40.513 }' 00:25:40.513 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:40.771 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:25:40.771 pt2' 00:25:40.771 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:40.771 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:40.771 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:40.771 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:40.771 "name": "pt1", 00:25:40.771 "aliases": [ 00:25:40.771 "97365c47-25de-5e6d-ac4f-a1b26d0cf1ec" 00:25:40.771 ], 00:25:40.771 "product_name": "passthru", 00:25:40.771 "block_size": 4128, 00:25:40.771 "num_blocks": 8192, 00:25:40.771 "uuid": "97365c47-25de-5e6d-ac4f-a1b26d0cf1ec", 00:25:40.771 "md_size": 32, 00:25:40.771 "md_interleave": true, 00:25:40.771 "dif_type": 0, 00:25:40.771 "assigned_rate_limits": { 00:25:40.771 "rw_ios_per_sec": 0, 00:25:40.771 "rw_mbytes_per_sec": 0, 00:25:40.771 "r_mbytes_per_sec": 0, 00:25:40.771 "w_mbytes_per_sec": 0 00:25:40.771 }, 00:25:40.771 "claimed": true, 00:25:40.771 "claim_type": "exclusive_write", 00:25:40.771 "zoned": false, 00:25:40.771 "supported_io_types": { 00:25:40.771 "read": true, 00:25:40.771 "write": true, 00:25:40.771 "unmap": true, 00:25:40.771 "write_zeroes": true, 00:25:40.771 "flush": true, 00:25:40.771 "reset": true, 00:25:40.771 "compare": false, 00:25:40.771 "compare_and_write": false, 00:25:40.771 "abort": true, 00:25:40.771 "nvme_admin": false, 00:25:40.771 "nvme_io": false 00:25:40.771 }, 00:25:40.771 "memory_domains": [ 00:25:40.771 { 00:25:40.771 "dma_device_id": "system", 00:25:40.771 "dma_device_type": 1 00:25:40.771 }, 00:25:40.771 { 00:25:40.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.771 "dma_device_type": 2 00:25:40.771 } 00:25:40.771 ], 00:25:40.771 "driver_specific": { 00:25:40.771 "passthru": { 00:25:40.771 "name": "pt1", 00:25:40.771 "base_bdev_name": "malloc1" 00:25:40.771 } 00:25:40.771 } 00:25:40.771 }' 00:25:41.039 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:41.039 03:20:11 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:41.039 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:25:41.039 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:41.039 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:41.039 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:25:41.039 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:41.039 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:41.336 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:25:41.336 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:41.336 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:41.336 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:25:41.336 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:41.336 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:41.336 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:41.595 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:41.595 "name": "pt2", 00:25:41.595 "aliases": [ 00:25:41.595 "3b2bd3b0-b4ca-5350-bba9-5fdb72971e38" 00:25:41.595 ], 00:25:41.595 "product_name": "passthru", 00:25:41.595 "block_size": 4128, 00:25:41.595 "num_blocks": 8192, 00:25:41.595 "uuid": "3b2bd3b0-b4ca-5350-bba9-5fdb72971e38", 00:25:41.595 "md_size": 32, 00:25:41.595 "md_interleave": true, 00:25:41.595 "dif_type": 0, 00:25:41.595 "assigned_rate_limits": { 00:25:41.595 "rw_ios_per_sec": 0, 00:25:41.595 "rw_mbytes_per_sec": 0, 00:25:41.595 "r_mbytes_per_sec": 0, 00:25:41.595 "w_mbytes_per_sec": 0 00:25:41.595 }, 00:25:41.595 "claimed": true, 00:25:41.595 "claim_type": "exclusive_write", 00:25:41.595 "zoned": false, 00:25:41.595 "supported_io_types": { 00:25:41.595 "read": true, 00:25:41.595 "write": true, 00:25:41.595 "unmap": true, 00:25:41.595 "write_zeroes": true, 00:25:41.595 "flush": true, 00:25:41.595 "reset": true, 00:25:41.595 "compare": false, 00:25:41.595 "compare_and_write": false, 00:25:41.595 "abort": true, 00:25:41.595 "nvme_admin": false, 00:25:41.595 "nvme_io": false 00:25:41.595 }, 00:25:41.595 "memory_domains": [ 00:25:41.595 { 00:25:41.595 "dma_device_id": "system", 00:25:41.595 "dma_device_type": 1 00:25:41.595 }, 00:25:41.595 { 00:25:41.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.595 "dma_device_type": 2 00:25:41.595 } 00:25:41.595 ], 00:25:41.595 "driver_specific": { 00:25:41.595 "passthru": { 00:25:41.595 "name": "pt2", 00:25:41.595 "base_bdev_name": "malloc2" 00:25:41.595 } 00:25:41.595 } 00:25:41.595 }' 00:25:41.595 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:41.595 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:41.595 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:25:41.595 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:41.595 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:41.595 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:25:41.595 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:41.854 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:41.854 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:25:41.854 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:41.854 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:41.854 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:25:41.854 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:41.854 03:20:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:42.112 [2024-05-15 03:20:13.138789] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:42.112 03:20:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7878ad66-c1af-4813-a49c-e6ac60663f2d 00:25:42.112 03:20:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 7878ad66-c1af-4813-a49c-e6ac60663f2d ']' 00:25:42.112 03:20:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:42.370 [2024-05-15 03:20:13.395231] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:42.370 [2024-05-15 03:20:13.395253] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:42.370 [2024-05-15 03:20:13.395305] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:42.370 [2024-05-15 03:20:13.395356] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:42.370 [2024-05-15 03:20:13.395364] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbfc0b0 name raid_bdev1, state offline 00:25:42.370 03:20:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.370 03:20:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:42.628 03:20:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:42.628 03:20:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:42.628 03:20:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:42.628 03:20:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:42.885 03:20:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:42.885 03:20:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:43.145 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:43.145 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:43.403 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:43.661 [2024-05-15 03:20:14.654533] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:43.661 [2024-05-15 03:20:14.655962] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:43.661 [2024-05-15 03:20:14.656018] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:43.661 [2024-05-15 03:20:14.656068] 
bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:43.661 [2024-05-15 03:20:14.656084] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:43.661 [2024-05-15 03:20:14.656092] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbfc600 name raid_bdev1, state configuring 00:25:43.661 request: 00:25:43.661 { 00:25:43.661 "name": "raid_bdev1", 00:25:43.661 "raid_level": "raid1", 00:25:43.661 "base_bdevs": [ 00:25:43.661 "malloc1", 00:25:43.661 "malloc2" 00:25:43.661 ], 00:25:43.661 "superblock": false, 00:25:43.661 "method": "bdev_raid_create", 00:25:43.661 "req_id": 1 00:25:43.661 } 00:25:43.661 Got JSON-RPC error response 00:25:43.661 response: 00:25:43.661 { 00:25:43.661 "code": -17, 00:25:43.661 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:43.661 } 00:25:43.661 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:25:43.661 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:43.661 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:43.661 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:43.661 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.661 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:43.919 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:43.919 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:43.919 03:20:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:44.177 [2024-05-15 03:20:15.163841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:44.177 [2024-05-15 03:20:15.163902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.177 [2024-05-15 03:20:15.163922] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbf26f0 00:25:44.177 [2024-05-15 03:20:15.163932] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.177 [2024-05-15 03:20:15.165428] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.177 [2024-05-15 03:20:15.165454] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:44.177 [2024-05-15 03:20:15.165498] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:44.177 [2024-05-15 03:20:15.165522] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:44.177 pt1 00:25:44.177 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:44.177 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:44.177 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:25:44.177 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:44.177 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:44.177 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:44.177 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:44.177 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:44.177 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:44.177 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:44.177 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.177 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.434 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:44.434 "name": "raid_bdev1", 00:25:44.434 "uuid": "7878ad66-c1af-4813-a49c-e6ac60663f2d", 00:25:44.434 "strip_size_kb": 0, 00:25:44.434 "state": "configuring", 00:25:44.434 "raid_level": "raid1", 00:25:44.434 "superblock": true, 00:25:44.434 "num_base_bdevs": 2, 00:25:44.434 "num_base_bdevs_discovered": 1, 00:25:44.434 "num_base_bdevs_operational": 2, 00:25:44.434 "base_bdevs_list": [ 00:25:44.434 { 00:25:44.434 "name": "pt1", 00:25:44.434 "uuid": "97365c47-25de-5e6d-ac4f-a1b26d0cf1ec", 00:25:44.434 "is_configured": true, 00:25:44.434 "data_offset": 256, 00:25:44.434 "data_size": 7936 00:25:44.434 }, 00:25:44.434 { 00:25:44.435 "name": null, 00:25:44.435 "uuid": "3b2bd3b0-b4ca-5350-bba9-5fdb72971e38", 00:25:44.435 "is_configured": false, 00:25:44.435 "data_offset": 256, 00:25:44.435 "data_size": 7936 00:25:44.435 } 00:25:44.435 ] 00:25:44.435 }' 00:25:44.435 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:44.435 03:20:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:44.998 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:44.998 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:44.998 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:44.998 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:45.256 [2024-05-15 03:20:16.294884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:45.256 [2024-05-15 03:20:16.294931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.256 [2024-05-15 03:20:16.294950] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbf2160 00:25:45.256 [2024-05-15 03:20:16.294959] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.256 [2024-05-15 
03:20:16.295128] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.256 [2024-05-15 03:20:16.295141] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:45.256 [2024-05-15 03:20:16.295181] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:45.256 [2024-05-15 03:20:16.295197] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:45.256 [2024-05-15 03:20:16.295280] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0xa67580 00:25:45.256 [2024-05-15 03:20:16.295288] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:45.256 [2024-05-15 03:20:16.295343] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbf2020 00:25:45.256 [2024-05-15 03:20:16.295420] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa67580 00:25:45.256 [2024-05-15 03:20:16.295427] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa67580 00:25:45.256 [2024-05-15 03:20:16.295484] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.256 pt2 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.256 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.513 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:45.513 "name": "raid_bdev1", 00:25:45.513 "uuid": "7878ad66-c1af-4813-a49c-e6ac60663f2d", 00:25:45.513 "strip_size_kb": 0, 00:25:45.513 "state": "online", 00:25:45.513 "raid_level": "raid1", 00:25:45.513 "superblock": true, 00:25:45.513 "num_base_bdevs": 2, 00:25:45.513 "num_base_bdevs_discovered": 2, 00:25:45.513 "num_base_bdevs_operational": 2, 
00:25:45.513 "base_bdevs_list": [ 00:25:45.513 { 00:25:45.513 "name": "pt1", 00:25:45.513 "uuid": "97365c47-25de-5e6d-ac4f-a1b26d0cf1ec", 00:25:45.513 "is_configured": true, 00:25:45.513 "data_offset": 256, 00:25:45.513 "data_size": 7936 00:25:45.513 }, 00:25:45.513 { 00:25:45.513 "name": "pt2", 00:25:45.513 "uuid": "3b2bd3b0-b4ca-5350-bba9-5fdb72971e38", 00:25:45.513 "is_configured": true, 00:25:45.513 "data_offset": 256, 00:25:45.513 "data_size": 7936 00:25:45.513 } 00:25:45.513 ] 00:25:45.513 }' 00:25:45.513 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:45.513 03:20:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.078 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:46.078 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:25:46.078 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:46.078 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:46.078 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:46.078 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:25:46.078 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:46.078 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:46.336 [2024-05-15 03:20:17.430153] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:46.336 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:46.336 "name": "raid_bdev1", 00:25:46.336 "aliases": [ 00:25:46.336 "7878ad66-c1af-4813-a49c-e6ac60663f2d" 00:25:46.336 ], 00:25:46.336 "product_name": "Raid Volume", 00:25:46.336 "block_size": 4128, 00:25:46.336 "num_blocks": 7936, 00:25:46.336 "uuid": "7878ad66-c1af-4813-a49c-e6ac60663f2d", 00:25:46.337 "md_size": 32, 00:25:46.337 "md_interleave": true, 00:25:46.337 "dif_type": 0, 00:25:46.337 "assigned_rate_limits": { 00:25:46.337 "rw_ios_per_sec": 0, 00:25:46.337 "rw_mbytes_per_sec": 0, 00:25:46.337 "r_mbytes_per_sec": 0, 00:25:46.337 "w_mbytes_per_sec": 0 00:25:46.337 }, 00:25:46.337 "claimed": false, 00:25:46.337 "zoned": false, 00:25:46.337 "supported_io_types": { 00:25:46.337 "read": true, 00:25:46.337 "write": true, 00:25:46.337 "unmap": false, 00:25:46.337 "write_zeroes": true, 00:25:46.337 "flush": false, 00:25:46.337 "reset": true, 00:25:46.337 "compare": false, 00:25:46.337 "compare_and_write": false, 00:25:46.337 "abort": false, 00:25:46.337 "nvme_admin": false, 00:25:46.337 "nvme_io": false 00:25:46.337 }, 00:25:46.337 "memory_domains": [ 00:25:46.337 { 00:25:46.337 "dma_device_id": "system", 00:25:46.337 "dma_device_type": 1 00:25:46.337 }, 00:25:46.337 { 00:25:46.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.337 "dma_device_type": 2 00:25:46.337 }, 00:25:46.337 { 00:25:46.337 "dma_device_id": "system", 00:25:46.337 "dma_device_type": 1 00:25:46.337 }, 00:25:46.337 { 00:25:46.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.337 "dma_device_type": 2 00:25:46.337 } 
00:25:46.337 ], 00:25:46.337 "driver_specific": { 00:25:46.337 "raid": { 00:25:46.337 "uuid": "7878ad66-c1af-4813-a49c-e6ac60663f2d", 00:25:46.337 "strip_size_kb": 0, 00:25:46.337 "state": "online", 00:25:46.337 "raid_level": "raid1", 00:25:46.337 "superblock": true, 00:25:46.337 "num_base_bdevs": 2, 00:25:46.337 "num_base_bdevs_discovered": 2, 00:25:46.337 "num_base_bdevs_operational": 2, 00:25:46.337 "base_bdevs_list": [ 00:25:46.337 { 00:25:46.337 "name": "pt1", 00:25:46.337 "uuid": "97365c47-25de-5e6d-ac4f-a1b26d0cf1ec", 00:25:46.337 "is_configured": true, 00:25:46.337 "data_offset": 256, 00:25:46.337 "data_size": 7936 00:25:46.337 }, 00:25:46.337 { 00:25:46.337 "name": "pt2", 00:25:46.337 "uuid": "3b2bd3b0-b4ca-5350-bba9-5fdb72971e38", 00:25:46.337 "is_configured": true, 00:25:46.337 "data_offset": 256, 00:25:46.337 "data_size": 7936 00:25:46.337 } 00:25:46.337 ] 00:25:46.337 } 00:25:46.337 } 00:25:46.337 }' 00:25:46.337 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:46.596 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:25:46.596 pt2' 00:25:46.596 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:46.596 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:46.596 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:46.596 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:46.596 "name": "pt1", 00:25:46.596 "aliases": [ 00:25:46.596 "97365c47-25de-5e6d-ac4f-a1b26d0cf1ec" 00:25:46.596 ], 00:25:46.596 "product_name": "passthru", 00:25:46.596 "block_size": 4128, 00:25:46.596 "num_blocks": 8192, 00:25:46.596 "uuid": "97365c47-25de-5e6d-ac4f-a1b26d0cf1ec", 00:25:46.596 "md_size": 32, 00:25:46.596 "md_interleave": true, 00:25:46.596 "dif_type": 0, 00:25:46.596 "assigned_rate_limits": { 00:25:46.596 "rw_ios_per_sec": 0, 00:25:46.596 "rw_mbytes_per_sec": 0, 00:25:46.596 "r_mbytes_per_sec": 0, 00:25:46.596 "w_mbytes_per_sec": 0 00:25:46.596 }, 00:25:46.596 "claimed": true, 00:25:46.596 "claim_type": "exclusive_write", 00:25:46.596 "zoned": false, 00:25:46.596 "supported_io_types": { 00:25:46.596 "read": true, 00:25:46.596 "write": true, 00:25:46.596 "unmap": true, 00:25:46.596 "write_zeroes": true, 00:25:46.596 "flush": true, 00:25:46.596 "reset": true, 00:25:46.596 "compare": false, 00:25:46.596 "compare_and_write": false, 00:25:46.596 "abort": true, 00:25:46.596 "nvme_admin": false, 00:25:46.596 "nvme_io": false 00:25:46.596 }, 00:25:46.596 "memory_domains": [ 00:25:46.596 { 00:25:46.596 "dma_device_id": "system", 00:25:46.596 "dma_device_type": 1 00:25:46.596 }, 00:25:46.596 { 00:25:46.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.596 "dma_device_type": 2 00:25:46.596 } 00:25:46.596 ], 00:25:46.596 "driver_specific": { 00:25:46.596 "passthru": { 00:25:46.596 "name": "pt1", 00:25:46.596 "base_bdev_name": "malloc1" 00:25:46.596 } 00:25:46.596 } 00:25:46.596 }' 00:25:46.855 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:46.855 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq 
.block_size 00:25:46.855 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:25:46.855 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:46.855 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:46.855 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:25:46.855 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:46.855 03:20:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:47.113 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:25:47.113 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:47.113 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:47.113 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:25:47.113 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:47.113 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:47.113 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:47.371 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:47.371 "name": "pt2", 00:25:47.371 "aliases": [ 00:25:47.371 "3b2bd3b0-b4ca-5350-bba9-5fdb72971e38" 00:25:47.371 ], 00:25:47.371 "product_name": "passthru", 00:25:47.371 "block_size": 4128, 00:25:47.371 "num_blocks": 8192, 00:25:47.371 "uuid": "3b2bd3b0-b4ca-5350-bba9-5fdb72971e38", 00:25:47.371 "md_size": 32, 00:25:47.371 "md_interleave": true, 00:25:47.371 "dif_type": 0, 00:25:47.371 "assigned_rate_limits": { 00:25:47.371 "rw_ios_per_sec": 0, 00:25:47.371 "rw_mbytes_per_sec": 0, 00:25:47.371 "r_mbytes_per_sec": 0, 00:25:47.371 "w_mbytes_per_sec": 0 00:25:47.371 }, 00:25:47.371 "claimed": true, 00:25:47.371 "claim_type": "exclusive_write", 00:25:47.371 "zoned": false, 00:25:47.371 "supported_io_types": { 00:25:47.371 "read": true, 00:25:47.371 "write": true, 00:25:47.371 "unmap": true, 00:25:47.371 "write_zeroes": true, 00:25:47.371 "flush": true, 00:25:47.371 "reset": true, 00:25:47.371 "compare": false, 00:25:47.371 "compare_and_write": false, 00:25:47.371 "abort": true, 00:25:47.371 "nvme_admin": false, 00:25:47.371 "nvme_io": false 00:25:47.371 }, 00:25:47.371 "memory_domains": [ 00:25:47.371 { 00:25:47.371 "dma_device_id": "system", 00:25:47.371 "dma_device_type": 1 00:25:47.371 }, 00:25:47.371 { 00:25:47.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.371 "dma_device_type": 2 00:25:47.371 } 00:25:47.371 ], 00:25:47.371 "driver_specific": { 00:25:47.371 "passthru": { 00:25:47.371 "name": "pt2", 00:25:47.371 "base_bdev_name": "malloc2" 00:25:47.371 } 00:25:47.371 } 00:25:47.371 }' 00:25:47.371 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:47.371 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:47.371 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 
4128 == 4128 ]] 00:25:47.371 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:47.371 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:47.629 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:25:47.629 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:47.629 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:47.629 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:25:47.629 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:47.629 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:47.629 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:25:47.629 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:47.629 03:20:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:47.887 [2024-05-15 03:20:18.986320] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:47.887 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 7878ad66-c1af-4813-a49c-e6ac60663f2d '!=' 7878ad66-c1af-4813-a49c-e6ac60663f2d ']' 00:25:47.887 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:47.887 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # case $1 in 00:25:47.887 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@215 -- # return 0 00:25:47.887 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:48.146 [2024-05-15 03:20:19.238771] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:48.146 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:48.146 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:48.146 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:48.146 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:48.146 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:48.146 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:48.146 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:48.146 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:48.146 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:48.146 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:25:48.146 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.146 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.405 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:48.405 "name": "raid_bdev1", 00:25:48.405 "uuid": "7878ad66-c1af-4813-a49c-e6ac60663f2d", 00:25:48.405 "strip_size_kb": 0, 00:25:48.405 "state": "online", 00:25:48.405 "raid_level": "raid1", 00:25:48.405 "superblock": true, 00:25:48.405 "num_base_bdevs": 2, 00:25:48.405 "num_base_bdevs_discovered": 1, 00:25:48.405 "num_base_bdevs_operational": 1, 00:25:48.405 "base_bdevs_list": [ 00:25:48.405 { 00:25:48.405 "name": null, 00:25:48.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.405 "is_configured": false, 00:25:48.405 "data_offset": 256, 00:25:48.405 "data_size": 7936 00:25:48.405 }, 00:25:48.405 { 00:25:48.405 "name": "pt2", 00:25:48.405 "uuid": "3b2bd3b0-b4ca-5350-bba9-5fdb72971e38", 00:25:48.405 "is_configured": true, 00:25:48.405 "data_offset": 256, 00:25:48.405 "data_size": 7936 00:25:48.405 } 00:25:48.405 ] 00:25:48.405 }' 00:25:48.405 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:48.405 03:20:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.971 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:49.229 [2024-05-15 03:20:20.353740] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:49.229 [2024-05-15 03:20:20.353766] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:49.229 [2024-05-15 03:20:20.353819] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:49.229 [2024-05-15 03:20:20.353869] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:49.229 [2024-05-15 03:20:20.353878] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa67580 name raid_bdev1, state offline 00:25:49.229 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:49.229 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.487 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:49.487 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:49.487 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:49.487 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:49.487 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:49.746 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:49.746 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:49.746 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:49.746 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:49.746 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:25:49.746 03:20:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:50.004 [2024-05-15 03:20:21.119756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:50.004 [2024-05-15 03:20:21.119799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:50.004 [2024-05-15 03:20:21.119818] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbf10a0 00:25:50.004 [2024-05-15 03:20:21.119828] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:50.004 [2024-05-15 03:20:21.121316] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:50.004 [2024-05-15 03:20:21.121341] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:50.004 [2024-05-15 03:20:21.121382] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:50.004 [2024-05-15 03:20:21.121405] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:50.004 [2024-05-15 03:20:21.121473] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0xa67b40 00:25:50.004 [2024-05-15 03:20:21.121481] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:50.004 [2024-05-15 03:20:21.121539] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa6fea0 00:25:50.004 [2024-05-15 03:20:21.121615] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa67b40 00:25:50.004 [2024-05-15 03:20:21.121623] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa67b40 00:25:50.004 [2024-05-15 03:20:21.121677] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:50.004 pt2 00:25:50.004 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:50.004 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:50.004 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:50.004 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:50.004 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:50.004 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:50.004 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:50.004 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:50.004 03:20:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:50.004 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:50.004 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.004 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.262 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:50.262 "name": "raid_bdev1", 00:25:50.262 "uuid": "7878ad66-c1af-4813-a49c-e6ac60663f2d", 00:25:50.262 "strip_size_kb": 0, 00:25:50.262 "state": "online", 00:25:50.262 "raid_level": "raid1", 00:25:50.262 "superblock": true, 00:25:50.262 "num_base_bdevs": 2, 00:25:50.262 "num_base_bdevs_discovered": 1, 00:25:50.262 "num_base_bdevs_operational": 1, 00:25:50.262 "base_bdevs_list": [ 00:25:50.262 { 00:25:50.262 "name": null, 00:25:50.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.262 "is_configured": false, 00:25:50.262 "data_offset": 256, 00:25:50.262 "data_size": 7936 00:25:50.262 }, 00:25:50.262 { 00:25:50.262 "name": "pt2", 00:25:50.262 "uuid": "3b2bd3b0-b4ca-5350-bba9-5fdb72971e38", 00:25:50.262 "is_configured": true, 00:25:50.262 "data_offset": 256, 00:25:50.262 "data_size": 7936 00:25:50.262 } 00:25:50.262 ] 00:25:50.262 }' 00:25:50.262 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:50.262 03:20:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:25:51.196 [2024-05-15 03:20:22.263039] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # '[' 7878ad66-c1af-4813-a49c-e6ac60663f2d '!=' 7878ad66-c1af-4813-a49c-e6ac60663f2d ']' 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@568 -- # killprocess 20810 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 20810 ']' 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 20810 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 20810 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:51.196 03:20:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 20810' 00:25:51.196 killing process with pid 20810 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@965 -- # kill 20810 00:25:51.196 [2024-05-15 03:20:22.330424] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:51.196 [2024-05-15 03:20:22.330487] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:51.196 [2024-05-15 03:20:22.330530] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:51.196 [2024-05-15 03:20:22.330539] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa67b40 name raid_bdev1, state offline 00:25:51.196 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # wait 20810 00:25:51.196 [2024-05-15 03:20:22.347221] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:51.455 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # return 0 00:25:51.455 00:25:51.455 real 0m14.364s 00:25:51.455 user 0m26.454s 00:25:51.455 sys 0m2.107s 00:25:51.455 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:51.455 03:20:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:51.455 ************************************ 00:25:51.455 END TEST raid_superblock_test_md_interleaved 00:25:51.455 ************************************ 00:25:51.455 03:20:22 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:25:51.455 03:20:22 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:25:51.455 03:20:22 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:51.455 03:20:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:51.714 ************************************ 00:25:51.714 START TEST raid_rebuild_test_sb_md_interleaved 00:25:51.714 ************************************ 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false false 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=2 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local superblock=true 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local background_io=false 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local verify=false 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i = 1 )) 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # echo BaseBdev1 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:25:51.714 03:20:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # echo BaseBdev2 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # local strip_size 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@582 -- # local create_arg 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local data_offset 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # '[' true = true ']' 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # create_arg+=' -s' 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # raid_pid=23369 00:25:51.714 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # waitforlisten 23369 /var/tmp/spdk-raid.sock 00:25:51.715 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:51.715 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 23369 ']' 00:25:51.715 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:51.715 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:51.715 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:51.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:51.715 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:51.715 03:20:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:51.715 [2024-05-15 03:20:22.713193] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
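(annotation, not part of the captured output) At this point the harness has launched bdevperf idle against a private RPC socket and is waiting for it to listen; every setup step that follows is issued through rpc.py on that socket. A rough by-hand replay of what comes next, as a sketch only: the bdevperf flags are copied from the command line logged above, and the rpc.py calls mirror the ones echoed below, assuming the same SPDK checkout layout.

    # Start the I/O engine idle: -z makes it wait for an RPC to kick off the
    # workload; -q 2 is the queue depth, -o 3M the I/O size, -w randrw with
    # -M 50 a 50%-read mix, -t 60 the run time in seconds, and -L bdev_raid
    # enables the *DEBUG* bdev_raid.c lines seen throughout this log.
    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    # 32 MiB malloc base devices, 4096-byte blocks, 32 bytes of interleaved
    # metadata per block (-m 32 -i), which is why the raid later reports
    # blocklen 4128 (4096 data + 32 md).
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1

The "I/O size of 3145728 is greater than zero copy threshold (65536)" notice above is the expected consequence of -o 3M: bdevperf falls back to regular buffers for I/Os larger than 64 KiB.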
00:25:51.715 [2024-05-15 03:20:22.713246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid23369 ] 00:25:51.715 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:51.715 Zero copy mechanism will not be used. 00:25:51.715 [2024-05-15 03:20:22.810382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.973 [2024-05-15 03:20:22.906190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.973 [2024-05-15 03:20:22.964797] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:51.973 [2024-05-15 03:20:22.964843] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:52.540 03:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:52.540 03:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:25:52.540 03:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:25:52.540 03:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:25:52.799 BaseBdev1_malloc 00:25:52.799 03:20:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:53.057 [2024-05-15 03:20:24.158767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:53.057 [2024-05-15 03:20:24.158809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:53.057 [2024-05-15 03:20:24.158829] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5a9e0 00:25:53.057 [2024-05-15 03:20:24.158838] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:53.057 [2024-05-15 03:20:24.160361] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:53.057 [2024-05-15 03:20:24.160386] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:53.057 BaseBdev1 00:25:53.057 03:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:25:53.057 03:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:25:53.315 BaseBdev2_malloc 00:25:53.315 03:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:53.573 [2024-05-15 03:20:24.677079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:53.573 [2024-05-15 03:20:24.677122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:53.573 [2024-05-15 03:20:24.677142] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f3fa70 00:25:53.573 [2024-05-15 03:20:24.677151] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:53.573 [2024-05-15 03:20:24.678615] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:53.573 [2024-05-15 03:20:24.678639] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:53.573 BaseBdev2 00:25:53.573 03:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:25:53.831 spare_malloc 00:25:53.831 03:20:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:54.089 spare_delay 00:25:54.089 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@614 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:54.347 [2024-05-15 03:20:25.432042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:54.347 [2024-05-15 03:20:25.432084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.347 [2024-05-15 03:20:25.432102] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f4ae40 00:25:54.347 [2024-05-15 03:20:25.432111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.347 [2024-05-15 03:20:25.433524] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.347 [2024-05-15 03:20:25.433554] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:54.347 spare 00:25:54.347 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:25:54.606 [2024-05-15 03:20:25.672698] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:54.606 [2024-05-15 03:20:25.673979] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:54.606 [2024-05-15 03:20:25.674141] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1db73b0 00:25:54.606 [2024-05-15 03:20:25.674154] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:54.606 [2024-05-15 03:20:25.674224] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1db72c0 00:25:54.606 [2024-05-15 03:20:25.674310] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1db73b0 00:25:54.606 [2024-05-15 03:20:25.674317] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1db73b0 00:25:54.606 [2024-05-15 03:20:25.674373] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:54.606 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:54.606 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:54.606 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
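(annotation, not part of the captured output) The base stack is now complete: two malloc-backed passthru bdevs (BaseBdev1, BaseBdev2) plus a delay-wrapped "spare", and bdev_raid_create -s -r raid1 has assembled them into raid_bdev1 with an on-disk superblock. The verify_raid_bdev_state helper whose locals are being declared here, and which recurs throughout the rest of the run, simply re-reads the array's JSON and asserts on individual fields. A minimal equivalent sketch, reusing the exact RPC and jq filter echoed in this log (the sock/rpc/info variable names are illustrative, not from the script):

    sock=/var/tmp/spdk-raid.sock
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    # Fetch the array's view the way bdev_raid.sh@127 does.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
    # Assert it came up online with both members discovered.
    [ "$(echo "$info" | jq -r .state)" = online ] || exit 1
    [ "$(echo "$info" | jq -r .num_base_bdevs_discovered)" -eq 2 ] || exit 1

The same pattern, with the expected state and operational count swapped in, backs every verify_raid_bdev_state call that follows, including the degraded "online raid1 0 1" checks after a base bdev is removed.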
00:25:54.606 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:54.606 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:54.606 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:54.606 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:54.606 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:54.606 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:54.606 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:54.606 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.606 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.864 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:54.864 "name": "raid_bdev1", 00:25:54.864 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:25:54.864 "strip_size_kb": 0, 00:25:54.864 "state": "online", 00:25:54.864 "raid_level": "raid1", 00:25:54.864 "superblock": true, 00:25:54.864 "num_base_bdevs": 2, 00:25:54.864 "num_base_bdevs_discovered": 2, 00:25:54.864 "num_base_bdevs_operational": 2, 00:25:54.864 "base_bdevs_list": [ 00:25:54.864 { 00:25:54.864 "name": "BaseBdev1", 00:25:54.864 "uuid": "94d1813f-dfd7-5fdd-9c06-af1ebd0aecc2", 00:25:54.864 "is_configured": true, 00:25:54.864 "data_offset": 256, 00:25:54.864 "data_size": 7936 00:25:54.864 }, 00:25:54.864 { 00:25:54.864 "name": "BaseBdev2", 00:25:54.864 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:25:54.864 "is_configured": true, 00:25:54.864 "data_offset": 256, 00:25:54.864 "data_size": 7936 00:25:54.864 } 00:25:54.864 ] 00:25:54.864 }' 00:25:54.864 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:54.864 03:20:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.430 03:20:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:55.430 03:20:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:25:55.689 [2024-05-15 03:20:26.807967] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:55.689 03:20:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=7936 00:25:55.689 03:20:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:55.689 03:20:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.947 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # data_offset=256 00:25:55.947 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@626 -- # 
'[' false = true ']' 00:25:55.947 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@629 -- # '[' false = true ']' 00:25:55.947 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:56.206 [2024-05-15 03:20:27.313307] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:56.206 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:56.206 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:56.206 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:56.206 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:56.206 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:56.206 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:56.206 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:56.206 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:56.206 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:56.206 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:56.206 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.206 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.464 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:56.464 "name": "raid_bdev1", 00:25:56.464 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:25:56.464 "strip_size_kb": 0, 00:25:56.464 "state": "online", 00:25:56.464 "raid_level": "raid1", 00:25:56.464 "superblock": true, 00:25:56.464 "num_base_bdevs": 2, 00:25:56.465 "num_base_bdevs_discovered": 1, 00:25:56.465 "num_base_bdevs_operational": 1, 00:25:56.465 "base_bdevs_list": [ 00:25:56.465 { 00:25:56.465 "name": null, 00:25:56.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.465 "is_configured": false, 00:25:56.465 "data_offset": 256, 00:25:56.465 "data_size": 7936 00:25:56.465 }, 00:25:56.465 { 00:25:56.465 "name": "BaseBdev2", 00:25:56.465 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:25:56.465 "is_configured": true, 00:25:56.465 "data_offset": 256, 00:25:56.465 "data_size": 7936 00:25:56.465 } 00:25:56.465 ] 00:25:56.465 }' 00:25:56.465 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:56.465 03:20:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:57.420 03:20:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:57.420 [2024-05-15 03:20:28.352098] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:57.420 [2024-05-15 03:20:28.355588] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1db7970 00:25:57.420 [2024-05-15 03:20:28.357652] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:57.420 03:20:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # sleep 1 00:25:58.372 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:58.372 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:58.372 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:58.372 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:58.373 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:58.373 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.373 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.631 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:58.631 "name": "raid_bdev1", 00:25:58.631 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:25:58.631 "strip_size_kb": 0, 00:25:58.631 "state": "online", 00:25:58.631 "raid_level": "raid1", 00:25:58.631 "superblock": true, 00:25:58.631 "num_base_bdevs": 2, 00:25:58.631 "num_base_bdevs_discovered": 2, 00:25:58.631 "num_base_bdevs_operational": 2, 00:25:58.631 "process": { 00:25:58.631 "type": "rebuild", 00:25:58.631 "target": "spare", 00:25:58.631 "progress": { 00:25:58.631 "blocks": 2816, 00:25:58.631 "percent": 35 00:25:58.631 } 00:25:58.631 }, 00:25:58.631 "base_bdevs_list": [ 00:25:58.631 { 00:25:58.631 "name": "spare", 00:25:58.631 "uuid": "e79a7f51-d015-593e-a76d-fc5d5393c665", 00:25:58.631 "is_configured": true, 00:25:58.631 "data_offset": 256, 00:25:58.631 "data_size": 7936 00:25:58.631 }, 00:25:58.631 { 00:25:58.631 "name": "BaseBdev2", 00:25:58.631 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:25:58.631 "is_configured": true, 00:25:58.631 "data_offset": 256, 00:25:58.631 "data_size": 7936 00:25:58.631 } 00:25:58.631 ] 00:25:58.631 }' 00:25:58.631 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:58.631 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:58.631 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:58.631 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:58.631 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:58.890 [2024-05-15 03:20:29.886563] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:58.890 [2024-05-15 03:20:29.969955] bdev_raid.c:2467:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:58.890 [2024-05-15 03:20:29.970000] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.890 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:58.890 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:58.890 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:58.890 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:58.890 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:58.890 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:25:58.890 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:58.890 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:58.890 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:58.890 03:20:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:58.890 03:20:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.890 03:20:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.149 03:20:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:59.149 "name": "raid_bdev1", 00:25:59.149 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:25:59.149 "strip_size_kb": 0, 00:25:59.149 "state": "online", 00:25:59.149 "raid_level": "raid1", 00:25:59.149 "superblock": true, 00:25:59.149 "num_base_bdevs": 2, 00:25:59.149 "num_base_bdevs_discovered": 1, 00:25:59.149 "num_base_bdevs_operational": 1, 00:25:59.149 "base_bdevs_list": [ 00:25:59.149 { 00:25:59.149 "name": null, 00:25:59.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.149 "is_configured": false, 00:25:59.149 "data_offset": 256, 00:25:59.149 "data_size": 7936 00:25:59.149 }, 00:25:59.149 { 00:25:59.149 "name": "BaseBdev2", 00:25:59.149 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:25:59.149 "is_configured": true, 00:25:59.149 "data_offset": 256, 00:25:59.149 "data_size": 7936 00:25:59.149 } 00:25:59.149 ] 00:25:59.149 }' 00:25:59.149 03:20:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:59.149 03:20:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:00.084 03:20:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:00.084 03:20:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:00.084 03:20:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:00.084 03:20:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:00.084 03:20:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:00.084 03:20:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.084 03:20:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.084 03:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:00.084 "name": "raid_bdev1", 00:26:00.084 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:00.084 "strip_size_kb": 0, 00:26:00.084 "state": "online", 00:26:00.084 "raid_level": "raid1", 00:26:00.084 "superblock": true, 00:26:00.084 "num_base_bdevs": 2, 00:26:00.084 "num_base_bdevs_discovered": 1, 00:26:00.084 "num_base_bdevs_operational": 1, 00:26:00.084 "base_bdevs_list": [ 00:26:00.084 { 00:26:00.084 "name": null, 00:26:00.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.084 "is_configured": false, 00:26:00.084 "data_offset": 256, 00:26:00.084 "data_size": 7936 00:26:00.084 }, 00:26:00.084 { 00:26:00.084 "name": "BaseBdev2", 00:26:00.084 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:00.084 "is_configured": true, 00:26:00.084 "data_offset": 256, 00:26:00.084 "data_size": 7936 00:26:00.084 } 00:26:00.084 ] 00:26:00.084 }' 00:26:00.084 03:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:00.084 03:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:00.084 03:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:00.084 03:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:00.084 03:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@667 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:00.344 [2024-05-15 03:20:31.461808] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:00.344 [2024-05-15 03:20:31.465283] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1dbdf20 00:26:00.344 [2024-05-15 03:20:31.466794] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:00.344 03:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # sleep 1 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.720 03:20:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:01.720 "name": "raid_bdev1", 00:26:01.720 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:01.720 "strip_size_kb": 0, 00:26:01.720 "state": "online", 00:26:01.720 "raid_level": "raid1", 00:26:01.720 "superblock": true, 00:26:01.720 "num_base_bdevs": 2, 00:26:01.720 "num_base_bdevs_discovered": 2, 00:26:01.720 "num_base_bdevs_operational": 2, 00:26:01.720 "process": { 00:26:01.720 "type": "rebuild", 00:26:01.720 "target": "spare", 00:26:01.720 "progress": { 00:26:01.720 "blocks": 3072, 00:26:01.720 "percent": 38 00:26:01.720 } 00:26:01.720 }, 00:26:01.720 "base_bdevs_list": [ 00:26:01.720 { 00:26:01.720 "name": "spare", 00:26:01.720 "uuid": "e79a7f51-d015-593e-a76d-fc5d5393c665", 00:26:01.720 "is_configured": true, 00:26:01.720 "data_offset": 256, 00:26:01.720 "data_size": 7936 00:26:01.720 }, 00:26:01.720 { 00:26:01.720 "name": "BaseBdev2", 00:26:01.720 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:01.720 "is_configured": true, 00:26:01.720 "data_offset": 256, 00:26:01.720 "data_size": 7936 00:26:01.720 } 00:26:01.720 ] 00:26:01.720 }' 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # '[' true = true ']' 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # '[' = false ']' 00:26:01.720 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 671: [: =: unary operator expected 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=2 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@698 -- # '[' 2 -gt 2 ']' 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # local timeout=1009 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.720 03:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.978 03:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:01.978 "name": "raid_bdev1", 00:26:01.978 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:01.978 "strip_size_kb": 0, 00:26:01.978 "state": "online", 00:26:01.978 "raid_level": "raid1", 00:26:01.978 "superblock": true, 00:26:01.978 "num_base_bdevs": 2, 00:26:01.978 "num_base_bdevs_discovered": 2, 00:26:01.978 "num_base_bdevs_operational": 2, 00:26:01.978 "process": { 00:26:01.978 "type": "rebuild", 00:26:01.978 "target": "spare", 00:26:01.978 "progress": { 00:26:01.978 "blocks": 3840, 00:26:01.978 "percent": 48 00:26:01.978 } 00:26:01.978 }, 00:26:01.978 "base_bdevs_list": [ 00:26:01.978 { 00:26:01.978 "name": "spare", 00:26:01.978 "uuid": "e79a7f51-d015-593e-a76d-fc5d5393c665", 00:26:01.978 "is_configured": true, 00:26:01.978 "data_offset": 256, 00:26:01.978 "data_size": 7936 00:26:01.978 }, 00:26:01.978 { 00:26:01.978 "name": "BaseBdev2", 00:26:01.978 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:01.978 "is_configured": true, 00:26:01.978 "data_offset": 256, 00:26:01.978 "data_size": 7936 00:26:01.978 } 00:26:01.978 ] 00:26:01.978 }' 00:26:01.978 03:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:02.236 03:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:02.236 03:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:02.236 03:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:02.236 03:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # sleep 1 00:26:03.177 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:26:03.177 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:03.177 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:03.177 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:03.177 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:03.177 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:03.177 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.177 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.436 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:03.436 "name": "raid_bdev1", 00:26:03.436 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:03.436 "strip_size_kb": 0, 00:26:03.436 "state": "online", 00:26:03.436 "raid_level": "raid1", 00:26:03.436 "superblock": true, 00:26:03.436 
"num_base_bdevs": 2, 00:26:03.436 "num_base_bdevs_discovered": 2, 00:26:03.436 "num_base_bdevs_operational": 2, 00:26:03.436 "process": { 00:26:03.436 "type": "rebuild", 00:26:03.436 "target": "spare", 00:26:03.436 "progress": { 00:26:03.436 "blocks": 7424, 00:26:03.436 "percent": 93 00:26:03.436 } 00:26:03.436 }, 00:26:03.436 "base_bdevs_list": [ 00:26:03.436 { 00:26:03.436 "name": "spare", 00:26:03.436 "uuid": "e79a7f51-d015-593e-a76d-fc5d5393c665", 00:26:03.436 "is_configured": true, 00:26:03.436 "data_offset": 256, 00:26:03.436 "data_size": 7936 00:26:03.436 }, 00:26:03.436 { 00:26:03.436 "name": "BaseBdev2", 00:26:03.436 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:03.436 "is_configured": true, 00:26:03.436 "data_offset": 256, 00:26:03.436 "data_size": 7936 00:26:03.436 } 00:26:03.436 ] 00:26:03.436 }' 00:26:03.436 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:03.436 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:03.436 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:03.436 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:03.436 03:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # sleep 1 00:26:03.436 [2024-05-15 03:20:34.590381] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:03.436 [2024-05-15 03:20:34.590447] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:03.436 [2024-05-15 03:20:34.590529] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:04.809 "name": "raid_bdev1", 00:26:04.809 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:04.809 "strip_size_kb": 0, 00:26:04.809 "state": "online", 00:26:04.809 "raid_level": "raid1", 00:26:04.809 "superblock": true, 00:26:04.809 "num_base_bdevs": 2, 00:26:04.809 "num_base_bdevs_discovered": 2, 00:26:04.809 "num_base_bdevs_operational": 2, 00:26:04.809 "base_bdevs_list": [ 00:26:04.809 { 00:26:04.809 "name": "spare", 00:26:04.809 "uuid": 
"e79a7f51-d015-593e-a76d-fc5d5393c665", 00:26:04.809 "is_configured": true, 00:26:04.809 "data_offset": 256, 00:26:04.809 "data_size": 7936 00:26:04.809 }, 00:26:04.809 { 00:26:04.809 "name": "BaseBdev2", 00:26:04.809 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:04.809 "is_configured": true, 00:26:04.809 "data_offset": 256, 00:26:04.809 "data_size": 7936 00:26:04.809 } 00:26:04.809 ] 00:26:04.809 }' 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # break 00:26:04.809 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:04.810 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:04.810 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:04.810 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:04.810 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:04.810 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.810 03:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.068 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:05.068 "name": "raid_bdev1", 00:26:05.068 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:05.068 "strip_size_kb": 0, 00:26:05.068 "state": "online", 00:26:05.068 "raid_level": "raid1", 00:26:05.068 "superblock": true, 00:26:05.068 "num_base_bdevs": 2, 00:26:05.068 "num_base_bdevs_discovered": 2, 00:26:05.068 "num_base_bdevs_operational": 2, 00:26:05.068 "base_bdevs_list": [ 00:26:05.068 { 00:26:05.068 "name": "spare", 00:26:05.068 "uuid": "e79a7f51-d015-593e-a76d-fc5d5393c665", 00:26:05.068 "is_configured": true, 00:26:05.068 "data_offset": 256, 00:26:05.068 "data_size": 7936 00:26:05.068 }, 00:26:05.068 { 00:26:05.068 "name": "BaseBdev2", 00:26:05.068 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:05.068 "is_configured": true, 00:26:05.068 "data_offset": 256, 00:26:05.068 "data_size": 7936 00:26:05.068 } 00:26:05.068 ] 00:26:05.068 }' 00:26:05.068 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:05.068 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:05.068 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:05.068 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:05.068 03:20:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:05.068 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:05.068 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:05.068 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:05.068 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:05.327 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:05.327 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:05.327 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:05.327 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:05.327 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:05.327 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.327 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.327 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:05.327 "name": "raid_bdev1", 00:26:05.327 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:05.327 "strip_size_kb": 0, 00:26:05.327 "state": "online", 00:26:05.327 "raid_level": "raid1", 00:26:05.327 "superblock": true, 00:26:05.327 "num_base_bdevs": 2, 00:26:05.327 "num_base_bdevs_discovered": 2, 00:26:05.327 "num_base_bdevs_operational": 2, 00:26:05.327 "base_bdevs_list": [ 00:26:05.327 { 00:26:05.327 "name": "spare", 00:26:05.327 "uuid": "e79a7f51-d015-593e-a76d-fc5d5393c665", 00:26:05.327 "is_configured": true, 00:26:05.327 "data_offset": 256, 00:26:05.327 "data_size": 7936 00:26:05.327 }, 00:26:05.327 { 00:26:05.327 "name": "BaseBdev2", 00:26:05.327 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:05.327 "is_configured": true, 00:26:05.327 "data_offset": 256, 00:26:05.327 "data_size": 7936 00:26:05.327 } 00:26:05.327 ] 00:26:05.327 }' 00:26:05.327 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:05.327 03:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:06.262 03:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:06.262 [2024-05-15 03:20:37.326774] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:06.262 [2024-05-15 03:20:37.326799] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:06.262 [2024-05-15 03:20:37.326858] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:06.262 [2024-05-15 03:20:37.326913] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:06.262 [2024-05-15 
03:20:37.326923] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1db73b0 name raid_bdev1, state offline 00:26:06.262 03:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@725 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.262 03:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@725 -- # jq length 00:26:06.520 03:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]] 00:26:06.520 03:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@727 -- # '[' false = true ']' 00:26:06.520 03:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # '[' true = true ']' 00:26:06.520 03:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:26:06.520 03:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev1 ']' 00:26:06.520 03:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:06.778 03:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:07.036 [2024-05-15 03:20:38.088759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:07.036 [2024-05-15 03:20:38.088799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.036 [2024-05-15 03:20:38.088817] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db9f20 00:26:07.036 [2024-05-15 03:20:38.088826] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.036 [2024-05-15 03:20:38.090584] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.036 [2024-05-15 03:20:38.090611] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:07.036 [2024-05-15 03:20:38.090655] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:07.036 [2024-05-15 03:20:38.090679] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:07.036 BaseBdev1 00:26:07.036 03:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:26:07.036 03:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev2 ']' 00:26:07.036 03:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:26:07.295 03:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:07.553 [2024-05-15 03:20:38.606148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:07.553 [2024-05-15 03:20:38.606184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.553 [2024-05-15 03:20:38.606201] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db7650 00:26:07.553 [2024-05-15 03:20:38.606210] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.553 [2024-05-15 03:20:38.606358] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.553 [2024-05-15 03:20:38.606371] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:07.553 [2024-05-15 03:20:38.606411] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:26:07.553 [2024-05-15 03:20:38.606419] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:26:07.553 [2024-05-15 03:20:38.606426] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:07.553 [2024-05-15 03:20:38.606438] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dba220 name raid_bdev1, state configuring 00:26:07.553 [2024-05-15 03:20:38.606465] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:07.553 BaseBdev2 00:26:07.553 03:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:07.811 03:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:08.070 [2024-05-15 03:20:39.103483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:08.070 [2024-05-15 03:20:39.103521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.070 [2024-05-15 03:20:39.103541] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db8ef0 00:26:08.070 [2024-05-15 03:20:39.103556] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.070 [2024-05-15 03:20:39.103724] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.070 [2024-05-15 03:20:39.103738] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:08.070 [2024-05-15 03:20:39.103787] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:26:08.070 [2024-05-15 03:20:39.103803] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:08.070 spare 00:26:08.070 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:08.070 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:08.070 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:08.070 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:08.070 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:08.070 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:08.070 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:08.070 03:20:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:08.070 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:08.070 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:08.070 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.070 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.070 [2024-05-15 03:20:39.204135] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x1db8b80 00:26:08.070 [2024-05-15 03:20:39.204149] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:08.070 [2024-05-15 03:20:39.204215] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1db98e0 00:26:08.070 [2024-05-15 03:20:39.204309] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1db8b80 00:26:08.070 [2024-05-15 03:20:39.204317] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1db8b80 00:26:08.070 [2024-05-15 03:20:39.204380] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.328 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:08.328 "name": "raid_bdev1", 00:26:08.328 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:08.328 "strip_size_kb": 0, 00:26:08.328 "state": "online", 00:26:08.328 "raid_level": "raid1", 00:26:08.328 "superblock": true, 00:26:08.328 "num_base_bdevs": 2, 00:26:08.328 "num_base_bdevs_discovered": 2, 00:26:08.328 "num_base_bdevs_operational": 2, 00:26:08.328 "base_bdevs_list": [ 00:26:08.328 { 00:26:08.328 "name": "spare", 00:26:08.328 "uuid": "e79a7f51-d015-593e-a76d-fc5d5393c665", 00:26:08.328 "is_configured": true, 00:26:08.328 "data_offset": 256, 00:26:08.328 "data_size": 7936 00:26:08.328 }, 00:26:08.328 { 00:26:08.328 "name": "BaseBdev2", 00:26:08.328 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:08.328 "is_configured": true, 00:26:08.328 "data_offset": 256, 00:26:08.328 "data_size": 7936 00:26:08.328 } 00:26:08.328 ] 00:26:08.328 }' 00:26:08.328 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:08.328 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.895 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:08.895 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:08.895 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:08.895 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:08.895 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:08.895 03:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.895 03:20:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.153 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:09.153 "name": "raid_bdev1", 00:26:09.153 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:09.153 "strip_size_kb": 0, 00:26:09.153 "state": "online", 00:26:09.153 "raid_level": "raid1", 00:26:09.153 "superblock": true, 00:26:09.153 "num_base_bdevs": 2, 00:26:09.153 "num_base_bdevs_discovered": 2, 00:26:09.153 "num_base_bdevs_operational": 2, 00:26:09.153 "base_bdevs_list": [ 00:26:09.153 { 00:26:09.153 "name": "spare", 00:26:09.153 "uuid": "e79a7f51-d015-593e-a76d-fc5d5393c665", 00:26:09.153 "is_configured": true, 00:26:09.153 "data_offset": 256, 00:26:09.153 "data_size": 7936 00:26:09.153 }, 00:26:09.153 { 00:26:09.153 "name": "BaseBdev2", 00:26:09.153 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:09.153 "is_configured": true, 00:26:09.153 "data_offset": 256, 00:26:09.153 "data_size": 7936 00:26:09.153 } 00:26:09.153 ] 00:26:09.153 }' 00:26:09.153 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:09.153 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:09.153 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:09.412 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:09.412 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.412 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:09.670 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # [[ spare == \s\p\a\r\e ]] 00:26:09.670 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:09.928 [2024-05-15 03:20:40.840259] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:09.928 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:09.928 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:09.928 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:09.928 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:09.928 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:09.928 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:26:09.928 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:09.928 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:09.928 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
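At this point the test has removed the "spare" base bdev and expects raid_bdev1 to stay online but degraded, with only one of two base bdevs operational. The verify_raid_bdev_state call being traced here boils down to the sketch below, reconstructed from the xtrace; the helper name is hypothetical and the body is an approximation, not the verbatim bdev_raid.sh:

    # Socket and rpc.py path taken from the trace above.
    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    verify_degraded() {
        local name=$1 expected_operational=$2 info
        # Pull only the bdev under test out of the full RPC dump.
        info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        # After removal the freed slot is reported with name null and
        # is_configured false, so the operational count drops to 1.
        [[ $(jq -r '.state' <<< "$info") == online ]] &&
            [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq $expected_operational ]]
    }

    verify_degraded raid_bdev1 1    # degraded: only BaseBdev2 is still configured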
00:26:09.928 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:09.928 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.928 03:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.186 03:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:10.186 "name": "raid_bdev1", 00:26:10.186 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:10.186 "strip_size_kb": 0, 00:26:10.186 "state": "online", 00:26:10.186 "raid_level": "raid1", 00:26:10.186 "superblock": true, 00:26:10.187 "num_base_bdevs": 2, 00:26:10.187 "num_base_bdevs_discovered": 1, 00:26:10.187 "num_base_bdevs_operational": 1, 00:26:10.187 "base_bdevs_list": [ 00:26:10.187 { 00:26:10.187 "name": null, 00:26:10.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.187 "is_configured": false, 00:26:10.187 "data_offset": 256, 00:26:10.187 "data_size": 7936 00:26:10.187 }, 00:26:10.187 { 00:26:10.187 "name": "BaseBdev2", 00:26:10.187 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:10.187 "is_configured": true, 00:26:10.187 "data_offset": 256, 00:26:10.187 "data_size": 7936 00:26:10.187 } 00:26:10.187 ] 00:26:10.187 }' 00:26:10.187 03:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:10.187 03:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:10.754 03:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:11.012 [2024-05-15 03:20:41.959283] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:11.012 [2024-05-15 03:20:41.959426] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:11.012 [2024-05-15 03:20:41.959440] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
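The seq_number NOTICE just above is the crux of the re-add path: on examine, the superblock sequence number found on the returning bdev (4) is compared against the assembled array's (5); the lower number marks spare's data as stale, so spare is re-added as a rebuild target rather than trusted as-is. A rebuild then starts, and the same poll loop seen earlier in this log (the @712-@716 trace lines) waits for it to finish. A minimal sketch of that loop, inferred from the xtrace; the timeout value is an assumption, and SECONDS is the bash builtin counting seconds since shell start:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=60    # assumed cap; the teardown later reports ~60 s of test time

    while (( SECONDS < timeout )); do
        tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # While rebuilding, .process carries type=rebuild and target=spare;
        # once the rebuild finishes the key vanishes and both fall back to "none".
        [[ $(jq -r '.process.type // "none"' <<< "$tmp") == none ]] &&
            [[ $(jq -r '.process.target // "none"' <<< "$tmp") == none ]] && break
        sleep 1
    done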
00:26:11.012 [2024-05-15 03:20:41.959465] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:11.013 [2024-05-15 03:20:41.962880] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1db7210 00:26:11.013 [2024-05-15 03:20:41.964984] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:11.013 03:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # sleep 1 00:26:11.948 03:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:11.948 03:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:11.948 03:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:11.948 03:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:11.948 03:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:11.948 03:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.948 03:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.206 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:12.206 "name": "raid_bdev1", 00:26:12.206 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:12.206 "strip_size_kb": 0, 00:26:12.206 "state": "online", 00:26:12.206 "raid_level": "raid1", 00:26:12.206 "superblock": true, 00:26:12.206 "num_base_bdevs": 2, 00:26:12.206 "num_base_bdevs_discovered": 2, 00:26:12.206 "num_base_bdevs_operational": 2, 00:26:12.206 "process": { 00:26:12.206 "type": "rebuild", 00:26:12.206 "target": "spare", 00:26:12.206 "progress": { 00:26:12.206 "blocks": 3072, 00:26:12.206 "percent": 38 00:26:12.206 } 00:26:12.206 }, 00:26:12.206 "base_bdevs_list": [ 00:26:12.206 { 00:26:12.206 "name": "spare", 00:26:12.206 "uuid": "e79a7f51-d015-593e-a76d-fc5d5393c665", 00:26:12.206 "is_configured": true, 00:26:12.206 "data_offset": 256, 00:26:12.206 "data_size": 7936 00:26:12.206 }, 00:26:12.206 { 00:26:12.206 "name": "BaseBdev2", 00:26:12.206 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:12.206 "is_configured": true, 00:26:12.206 "data_offset": 256, 00:26:12.206 "data_size": 7936 00:26:12.206 } 00:26:12.206 ] 00:26:12.206 }' 00:26:12.206 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:12.206 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:12.206 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:12.206 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:12.206 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:12.465 [2024-05-15 03:20:43.575976] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:12.465 [2024-05-15 03:20:43.577273] 
bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:12.465 [2024-05-15 03:20:43.577310] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:12.465 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:12.465 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:12.465 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:12.465 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:12.465 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:12.465 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:26:12.465 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:12.465 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:12.465 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:12.465 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:12.465 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.465 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.724 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:12.724 "name": "raid_bdev1", 00:26:12.724 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:12.724 "strip_size_kb": 0, 00:26:12.724 "state": "online", 00:26:12.724 "raid_level": "raid1", 00:26:12.724 "superblock": true, 00:26:12.724 "num_base_bdevs": 2, 00:26:12.724 "num_base_bdevs_discovered": 1, 00:26:12.724 "num_base_bdevs_operational": 1, 00:26:12.724 "base_bdevs_list": [ 00:26:12.724 { 00:26:12.724 "name": null, 00:26:12.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.724 "is_configured": false, 00:26:12.724 "data_offset": 256, 00:26:12.724 "data_size": 7936 00:26:12.724 }, 00:26:12.724 { 00:26:12.724 "name": "BaseBdev2", 00:26:12.724 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:12.724 "is_configured": true, 00:26:12.724 "data_offset": 256, 00:26:12.724 "data_size": 7936 00:26:12.724 } 00:26:12.724 ] 00:26:12.724 }' 00:26:12.724 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:12.724 03:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:13.659 03:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:13.659 [2024-05-15 03:20:44.727927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:13.659 [2024-05-15 03:20:44.727971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.659 [2024-05-15 03:20:44.727989] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db96a0 00:26:13.659 [2024-05-15 03:20:44.727999] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.659 [2024-05-15 03:20:44.728173] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.659 [2024-05-15 03:20:44.728186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:13.659 [2024-05-15 03:20:44.728237] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:26:13.659 [2024-05-15 03:20:44.728247] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:13.659 [2024-05-15 03:20:44.728255] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:13.659 [2024-05-15 03:20:44.728276] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:13.659 [2024-05-15 03:20:44.731687] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f4d5a0 00:26:13.659 [2024-05-15 03:20:44.733207] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:13.659 spare 00:26:13.659 03:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:26:15.034 03:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:15.034 03:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:15.034 03:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:15.034 03:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:15.034 03:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:15.034 03:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.034 03:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.034 03:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:15.034 "name": "raid_bdev1", 00:26:15.034 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:15.034 "strip_size_kb": 0, 00:26:15.034 "state": "online", 00:26:15.034 "raid_level": "raid1", 00:26:15.034 "superblock": true, 00:26:15.034 "num_base_bdevs": 2, 00:26:15.034 "num_base_bdevs_discovered": 2, 00:26:15.034 "num_base_bdevs_operational": 2, 00:26:15.034 "process": { 00:26:15.034 "type": "rebuild", 00:26:15.034 "target": "spare", 00:26:15.034 "progress": { 00:26:15.034 "blocks": 2816, 00:26:15.034 "percent": 35 00:26:15.034 } 00:26:15.034 }, 00:26:15.034 "base_bdevs_list": [ 00:26:15.034 { 00:26:15.034 "name": "spare", 00:26:15.034 "uuid": "e79a7f51-d015-593e-a76d-fc5d5393c665", 00:26:15.034 "is_configured": true, 00:26:15.034 "data_offset": 256, 00:26:15.034 "data_size": 7936 00:26:15.034 }, 00:26:15.034 { 00:26:15.034 "name": "BaseBdev2", 00:26:15.034 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:15.034 "is_configured": true, 00:26:15.034 "data_offset": 256, 00:26:15.034 "data_size": 7936 00:26:15.034 } 
00:26:15.034 ] 00:26:15.034 }' 00:26:15.034 03:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:15.034 03:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:15.034 03:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:15.034 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:15.034 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:15.034 [2024-05-15 03:20:46.169787] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:15.293 [2024-05-15 03:20:46.244656] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:15.293 [2024-05-15 03:20:46.244699] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:15.293 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:15.293 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:15.293 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:15.293 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:15.293 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:15.293 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:26:15.293 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:15.293 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:15.293 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:15.293 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:15.293 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.293 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.562 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:15.562 "name": "raid_bdev1", 00:26:15.562 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:15.562 "strip_size_kb": 0, 00:26:15.562 "state": "online", 00:26:15.562 "raid_level": "raid1", 00:26:15.562 "superblock": true, 00:26:15.562 "num_base_bdevs": 2, 00:26:15.562 "num_base_bdevs_discovered": 1, 00:26:15.562 "num_base_bdevs_operational": 1, 00:26:15.562 "base_bdevs_list": [ 00:26:15.562 { 00:26:15.563 "name": null, 00:26:15.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.563 "is_configured": false, 00:26:15.563 "data_offset": 256, 00:26:15.563 "data_size": 7936 00:26:15.563 }, 00:26:15.563 { 00:26:15.563 "name": "BaseBdev2", 00:26:15.563 "uuid": 
"83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:15.563 "is_configured": true, 00:26:15.563 "data_offset": 256, 00:26:15.563 "data_size": 7936 00:26:15.563 } 00:26:15.563 ] 00:26:15.563 }' 00:26:15.563 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:15.563 03:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:16.159 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:16.159 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:16.159 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:16.159 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:16.159 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:16.159 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.159 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.418 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:16.418 "name": "raid_bdev1", 00:26:16.418 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:16.418 "strip_size_kb": 0, 00:26:16.418 "state": "online", 00:26:16.418 "raid_level": "raid1", 00:26:16.418 "superblock": true, 00:26:16.418 "num_base_bdevs": 2, 00:26:16.418 "num_base_bdevs_discovered": 1, 00:26:16.418 "num_base_bdevs_operational": 1, 00:26:16.418 "base_bdevs_list": [ 00:26:16.418 { 00:26:16.418 "name": null, 00:26:16.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.418 "is_configured": false, 00:26:16.418 "data_offset": 256, 00:26:16.418 "data_size": 7936 00:26:16.418 }, 00:26:16.418 { 00:26:16.418 "name": "BaseBdev2", 00:26:16.418 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:16.418 "is_configured": true, 00:26:16.418 "data_offset": 256, 00:26:16.418 "data_size": 7936 00:26:16.418 } 00:26:16.418 ] 00:26:16.418 }' 00:26:16.418 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:16.418 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:16.418 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:16.418 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:16.418 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:16.676 03:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@785 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:16.935 [2024-05-15 03:20:48.005088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:16.935 [2024-05-15 03:20:48.005141] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.935 [2024-05-15 03:20:48.005159] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f4d240 00:26:16.935 [2024-05-15 03:20:48.005169] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.935 [2024-05-15 03:20:48.005325] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.935 [2024-05-15 03:20:48.005338] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:16.935 [2024-05-15 03:20:48.005381] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:16.935 [2024-05-15 03:20:48.005390] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:16.935 [2024-05-15 03:20:48.005397] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:16.935 BaseBdev1 00:26:16.935 03:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # sleep 1 00:26:17.871 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:17.871 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:17.871 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:18.129 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:18.129 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:18.129 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:26:18.129 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:18.129 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:18.129 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:18.129 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:18.129 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.129 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.387 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:18.387 "name": "raid_bdev1", 00:26:18.387 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:18.387 "strip_size_kb": 0, 00:26:18.387 "state": "online", 00:26:18.387 "raid_level": "raid1", 00:26:18.387 "superblock": true, 00:26:18.387 "num_base_bdevs": 2, 00:26:18.387 "num_base_bdevs_discovered": 1, 00:26:18.387 "num_base_bdevs_operational": 1, 00:26:18.387 "base_bdevs_list": [ 00:26:18.387 { 00:26:18.387 "name": null, 00:26:18.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.387 "is_configured": false, 00:26:18.387 "data_offset": 256, 00:26:18.387 "data_size": 7936 00:26:18.387 }, 00:26:18.387 { 00:26:18.387 "name": "BaseBdev2", 00:26:18.387 "uuid": 
"83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:18.387 "is_configured": true, 00:26:18.387 "data_offset": 256, 00:26:18.387 "data_size": 7936 00:26:18.387 } 00:26:18.387 ] 00:26:18.387 }' 00:26:18.387 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:18.387 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:18.953 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:18.953 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:18.953 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:18.953 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:18.953 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:18.953 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.953 03:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:19.211 "name": "raid_bdev1", 00:26:19.211 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:19.211 "strip_size_kb": 0, 00:26:19.211 "state": "online", 00:26:19.211 "raid_level": "raid1", 00:26:19.211 "superblock": true, 00:26:19.211 "num_base_bdevs": 2, 00:26:19.211 "num_base_bdevs_discovered": 1, 00:26:19.211 "num_base_bdevs_operational": 1, 00:26:19.211 "base_bdevs_list": [ 00:26:19.211 { 00:26:19.211 "name": null, 00:26:19.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.211 "is_configured": false, 00:26:19.211 "data_offset": 256, 00:26:19.211 "data_size": 7936 00:26:19.211 }, 00:26:19.211 { 00:26:19.211 "name": "BaseBdev2", 00:26:19.211 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:19.211 "is_configured": true, 00:26:19.211 "data_offset": 256, 00:26:19.211 "data_size": 7936 00:26:19.211 } 00:26:19.211 ] 00:26:19.211 }' 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:19.211 03:20:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:19.211 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:19.469 [2024-05-15 03:20:50.519828] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:19.469 [2024-05-15 03:20:50.519956] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:19.469 [2024-05-15 03:20:50.519969] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:19.469 request: 00:26:19.469 { 00:26:19.469 "raid_bdev": "raid_bdev1", 00:26:19.469 "base_bdev": "BaseBdev1", 00:26:19.469 "method": "bdev_raid_add_base_bdev", 00:26:19.469 "req_id": 1 00:26:19.469 } 00:26:19.469 Got JSON-RPC error response 00:26:19.469 response: 00:26:19.469 { 00:26:19.469 "code": -22, 00:26:19.469 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:19.469 } 00:26:19.469 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:26:19.469 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:19.469 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:19.469 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:19.469 03:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # sleep 1 00:26:20.403 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:20.403 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:20.403 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:20.403 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:20.403 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:20.403 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:26:20.403 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:20.403 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:20.403 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:20.403 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:20.403 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.403 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.662 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:20.662 "name": "raid_bdev1", 00:26:20.662 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:20.662 "strip_size_kb": 0, 00:26:20.662 "state": "online", 00:26:20.662 "raid_level": "raid1", 00:26:20.662 "superblock": true, 00:26:20.662 "num_base_bdevs": 2, 00:26:20.662 "num_base_bdevs_discovered": 1, 00:26:20.662 "num_base_bdevs_operational": 1, 00:26:20.662 "base_bdevs_list": [ 00:26:20.662 { 00:26:20.662 "name": null, 00:26:20.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:20.662 "is_configured": false, 00:26:20.662 "data_offset": 256, 00:26:20.662 "data_size": 7936 00:26:20.662 }, 00:26:20.662 { 00:26:20.662 "name": "BaseBdev2", 00:26:20.662 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:20.662 "is_configured": true, 00:26:20.662 "data_offset": 256, 00:26:20.662 "data_size": 7936 00:26:20.662 } 00:26:20.662 ] 00:26:20.662 }' 00:26:20.662 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:20.662 03:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:21.596 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:21.596 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:21.596 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:21.596 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:21.596 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:21.596 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.596 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.596 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:21.596 "name": "raid_bdev1", 00:26:21.596 "uuid": "2108f89b-c3a7-4a53-9ef2-0824b1fa74cc", 00:26:21.596 "strip_size_kb": 0, 00:26:21.596 "state": "online", 00:26:21.596 "raid_level": "raid1", 00:26:21.596 "superblock": true, 
00:26:21.596 "num_base_bdevs": 2, 00:26:21.596 "num_base_bdevs_discovered": 1, 00:26:21.596 "num_base_bdevs_operational": 1, 00:26:21.596 "base_bdevs_list": [ 00:26:21.596 { 00:26:21.596 "name": null, 00:26:21.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.596 "is_configured": false, 00:26:21.596 "data_offset": 256, 00:26:21.596 "data_size": 7936 00:26:21.596 }, 00:26:21.596 { 00:26:21.596 "name": "BaseBdev2", 00:26:21.596 "uuid": "83fcb3d5-1454-5c39-b850-caccc379726f", 00:26:21.596 "is_configured": true, 00:26:21.596 "data_offset": 256, 00:26:21.596 "data_size": 7936 00:26:21.596 } 00:26:21.596 ] 00:26:21.596 }' 00:26:21.596 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:21.596 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:21.596 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:21.855 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:21.855 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # killprocess 23369 00:26:21.855 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 23369 ']' 00:26:21.855 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 23369 00:26:21.855 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:26:21.855 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:21.855 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 23369 00:26:21.855 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:21.855 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:21.855 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 23369' 00:26:21.855 killing process with pid 23369 00:26:21.855 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 23369 00:26:21.855 Received shutdown signal, test time was about 60.000000 seconds 00:26:21.855 00:26:21.855 Latency(us) 00:26:21.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.855 =================================================================================================================== 00:26:21.855 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:21.855 [2024-05-15 03:20:52.806740] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:21.855 [2024-05-15 03:20:52.806835] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:21.855 [2024-05-15 03:20:52.806882] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:21.855 [2024-05-15 03:20:52.806892] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1db8b80 name raid_bdev1, state offline 00:26:21.855 03:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 23369 00:26:21.855 [2024-05-15 03:20:52.832399] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:26:22.113 03:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@797 -- # return 0
00:26:22.113
00:26:22.114 real 0m30.405s
00:26:22.114 user 0m49.658s
00:26:22.114 sys 0m3.168s
00:26:22.114 03:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable
00:26:22.114 03:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:26:22.114 ************************************
00:26:22.114 END TEST raid_rebuild_test_sb_md_interleaved
00:26:22.114 ************************************
00:26:22.114 03:20:53 bdev_raid -- bdev/bdev_raid.sh@862 -- # rm -f /raidrandtest
00:26:22.114
00:26:22.114 real 16m39.840s
00:26:22.114 user 29m12.389s
00:26:22.114 sys 2m21.779s
00:26:22.114 03:20:53 bdev_raid -- common/autotest_common.sh@1122 -- # xtrace_disable
00:26:22.114 03:20:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:26:22.114 ************************************
00:26:22.114 END TEST bdev_raid
00:26:22.114 ************************************
00:26:22.114 03:20:53 -- spdk/autotest.sh@187 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh
00:26:22.114 03:20:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:26:22.114 03:20:53 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:26:22.114 03:20:53 -- common/autotest_common.sh@10 -- # set +x
00:26:22.114 ************************************
00:26:22.114 START TEST bdevperf_config
00:26:22.114 ************************************
00:26:22.114 03:20:53 bdevperf_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh
00:26:22.114 * Looking for test storage...
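
The xtrace above (bdev_raid.sh@183-@191) is the test's closing health check: it re-reads raid_bdev1 over the RPC socket and asserts that no background process is still attached to it. A minimal sketch of what those traced commands amount to, in bash (the function name, locals and jq filters are taken from the trace; treat the body as a reconstruction, not the exact source):

rpc_py="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

verify_raid_bdev_process() {
        local raid_bdev_name=$1
        local process_type=$2   # expected .process.type, "none" once the rebuild is done
        local target=$3         # expected .process.target
        local raid_bdev_info

        # @188: dump all raid bdevs, keep only the one under test
        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        # @190/@191: ".process" is absent on an idle array, so // substitutes "none"
        [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]]
        [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
}

verify_raid_bdev_process raid_bdev1 none none   # the call traced at bdev_raid.sh@792

The [[ none == \n\o\n\e ]] lines in the trace are these same comparisons; xtrace backslash-escapes the quoted right-hand side to show it matches literally rather than as a glob.
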
00:26:22.114 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:26:22.114 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:26:22.114 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:26:22.114 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:26:22.114 03:20:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:22.373 03:20:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:22.373 03:20:53 bdevperf_config -- bdevperf/common.sh@19 -- # 
echo 00:26:22.373 00:26:22.373 03:20:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:22.373 03:20:53 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:26:22.373 03:20:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:22.373 03:20:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:26:22.373 03:20:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:26:22.373 03:20:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:22.373 03:20:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:22.373 03:20:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:26:22.373 00:26:22.373 03:20:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:22.373 03:20:53 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:26:24.905 03:20:55 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-05-15 03:20:53.337039] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:24.905 [2024-05-15 03:20:53.337098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid28707 ] 00:26:24.905 Using job config with 4 jobs 00:26:24.905 [2024-05-15 03:20:53.449645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.905 [2024-05-15 03:20:53.564083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.905 cpumask for '\''job0'\'' is too big 00:26:24.905 cpumask for '\''job1'\'' is too big 00:26:24.905 cpumask for '\''job2'\'' is too big 00:26:24.905 cpumask for '\''job3'\'' is too big 00:26:24.905 Running I/O for 2 seconds... 00:26:24.905 00:26:24.905 Latency(us) 00:26:24.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.905 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:24.905 Malloc0 : 2.02 21572.74 21.07 0.00 0.00 11857.34 2059.70 18100.42 00:26:24.905 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:24.905 Malloc0 : 2.02 21551.00 21.05 0.00 0.00 11840.62 2059.70 16103.13 00:26:24.905 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:24.905 Malloc0 : 2.02 21529.77 21.03 0.00 0.00 11823.16 2044.10 13981.01 00:26:24.905 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:24.905 Malloc0 : 2.03 21602.83 21.10 0.00 0.00 11753.92 998.64 12170.97 00:26:24.905 =================================================================================================================== 00:26:24.905 Total : 86256.34 84.23 0.00 0.00 11818.67 998.64 18100.42' 00:26:24.905 03:20:55 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-05-15 03:20:53.337039] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:26:24.905 [2024-05-15 03:20:53.337098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid28707 ] 00:26:24.905 Using job config with 4 jobs 00:26:24.905 [2024-05-15 03:20:53.449645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.905 [2024-05-15 03:20:53.564083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.905 cpumask for '\''job0'\'' is too big 00:26:24.905 cpumask for '\''job1'\'' is too big 00:26:24.905 cpumask for '\''job2'\'' is too big 00:26:24.905 cpumask for '\''job3'\'' is too big 00:26:24.905 Running I/O for 2 seconds... 00:26:24.905 00:26:24.905 Latency(us) 00:26:24.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.905 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:24.905 Malloc0 : 2.02 21572.74 21.07 0.00 0.00 11857.34 2059.70 18100.42 00:26:24.905 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:24.905 Malloc0 : 2.02 21551.00 21.05 0.00 0.00 11840.62 2059.70 16103.13 00:26:24.905 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:24.905 Malloc0 : 2.02 21529.77 21.03 0.00 0.00 11823.16 2044.10 13981.01 00:26:24.905 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:24.905 Malloc0 : 2.03 21602.83 21.10 0.00 0.00 11753.92 998.64 12170.97 00:26:24.905 =================================================================================================================== 00:26:24.905 Total : 86256.34 84.23 0.00 0.00 11818.67 998.64 18100.42' 00:26:24.905 03:20:55 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-15 03:20:53.337039] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:24.905 [2024-05-15 03:20:53.337098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid28707 ] 00:26:24.905 Using job config with 4 jobs 00:26:24.905 [2024-05-15 03:20:53.449645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.905 [2024-05-15 03:20:53.564083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.905 cpumask for '\''job0'\'' is too big 00:26:24.905 cpumask for '\''job1'\'' is too big 00:26:24.905 cpumask for '\''job2'\'' is too big 00:26:24.905 cpumask for '\''job3'\'' is too big 00:26:24.905 Running I/O for 2 seconds... 
00:26:24.905 00:26:24.905 Latency(us) 00:26:24.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.905 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:24.905 Malloc0 : 2.02 21572.74 21.07 0.00 0.00 11857.34 2059.70 18100.42 00:26:24.905 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:24.905 Malloc0 : 2.02 21551.00 21.05 0.00 0.00 11840.62 2059.70 16103.13 00:26:24.905 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:24.905 Malloc0 : 2.02 21529.77 21.03 0.00 0.00 11823.16 2044.10 13981.01 00:26:24.905 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:24.905 Malloc0 : 2.03 21602.83 21.10 0.00 0.00 11753.92 998.64 12170.97 00:26:24.905 =================================================================================================================== 00:26:24.905 Total : 86256.34 84.23 0.00 0.00 11818.67 998.64 18100.42' 00:26:24.905 03:20:55 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:24.905 03:20:55 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:24.905 03:20:56 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:26:24.906 03:20:56 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:26:24.906 [2024-05-15 03:20:56.056482] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:24.906 [2024-05-15 03:20:56.056541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid29087 ] 00:26:25.164 [2024-05-15 03:20:56.168603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.164 [2024-05-15 03:20:56.282074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.422 cpumask for 'job0' is too big 00:26:25.422 cpumask for 'job1' is too big 00:26:25.422 cpumask for 'job2' is too big 00:26:25.422 cpumask for 'job3' is too big 00:26:27.955 03:20:58 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:26:27.955 Running I/O for 2 seconds... 
00:26:27.955 00:26:27.955 Latency(us) 00:26:27.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.955 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:27.955 Malloc0 : 2.02 21504.37 21.00 0.00 0.00 11885.20 2090.91 18225.25 00:26:27.955 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:27.955 Malloc0 : 2.03 21483.16 20.98 0.00 0.00 11867.37 2075.31 16227.96 00:26:27.955 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:27.955 Malloc0 : 2.03 21462.01 20.96 0.00 0.00 11849.12 2075.31 14105.84 00:26:27.955 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:27.956 Malloc0 : 2.03 21440.86 20.94 0.00 0.00 11830.46 2075.31 12233.39 00:26:27.956 =================================================================================================================== 00:26:27.956 Total : 85890.40 83.88 0.00 0.00 11858.04 2075.31 18225.25' 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:26:27.956 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:26:27.956 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:26:27.956 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:27.956 03:20:58 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-05-15 03:20:58.786238] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:30.491 [2024-05-15 03:20:58.786300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid29520 ] 00:26:30.491 Using job config with 3 jobs 00:26:30.491 [2024-05-15 03:20:58.898612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.491 [2024-05-15 03:20:59.005840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.491 cpumask for '\''job0'\'' is too big 00:26:30.491 cpumask for '\''job1'\'' is too big 00:26:30.491 cpumask for '\''job2'\'' is too big 00:26:30.491 Running I/O for 2 seconds... 00:26:30.491 00:26:30.491 Latency(us) 00:26:30.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.491 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:30.491 Malloc0 : 2.01 29009.16 28.33 0.00 0.00 8822.39 2090.91 13044.78 00:26:30.491 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:30.491 Malloc0 : 2.01 28980.15 28.30 0.00 0.00 8808.71 2059.70 10922.67 00:26:30.491 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:30.491 Malloc0 : 2.02 29035.44 28.35 0.00 0.00 8770.07 998.64 9112.62 00:26:30.491 =================================================================================================================== 00:26:30.491 Total : 87024.75 84.99 0.00 0.00 8800.34 998.64 13044.78' 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-05-15 03:20:58.786238] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:30.491 [2024-05-15 03:20:58.786300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid29520 ] 00:26:30.491 Using job config with 3 jobs 00:26:30.491 [2024-05-15 03:20:58.898612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.491 [2024-05-15 03:20:59.005840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.491 cpumask for '\''job0'\'' is too big 00:26:30.491 cpumask for '\''job1'\'' is too big 00:26:30.491 cpumask for '\''job2'\'' is too big 00:26:30.491 Running I/O for 2 seconds... 
00:26:30.491 00:26:30.491 Latency(us) 00:26:30.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.491 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:30.491 Malloc0 : 2.01 29009.16 28.33 0.00 0.00 8822.39 2090.91 13044.78 00:26:30.491 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:30.491 Malloc0 : 2.01 28980.15 28.30 0.00 0.00 8808.71 2059.70 10922.67 00:26:30.491 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:30.491 Malloc0 : 2.02 29035.44 28.35 0.00 0.00 8770.07 998.64 9112.62 00:26:30.491 =================================================================================================================== 00:26:30.491 Total : 87024.75 84.99 0.00 0.00 8800.34 998.64 13044.78' 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-15 03:20:58.786238] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:30.491 [2024-05-15 03:20:58.786300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid29520 ] 00:26:30.491 Using job config with 3 jobs 00:26:30.491 [2024-05-15 03:20:58.898612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.491 [2024-05-15 03:20:59.005840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.491 cpumask for '\''job0'\'' is too big 00:26:30.491 cpumask for '\''job1'\'' is too big 00:26:30.491 cpumask for '\''job2'\'' is too big 00:26:30.491 Running I/O for 2 seconds... 00:26:30.491 00:26:30.491 Latency(us) 00:26:30.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.491 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:30.491 Malloc0 : 2.01 29009.16 28.33 0.00 0.00 8822.39 2090.91 13044.78 00:26:30.491 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:30.491 Malloc0 : 2.01 28980.15 28.30 0.00 0.00 8808.71 2059.70 10922.67 00:26:30.491 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:30.491 Malloc0 : 2.02 29035.44 28.35 0.00 0.00 8770.07 998.64 9112.62 00:26:30.491 =================================================================================================================== 00:26:30.491 Total : 87024.75 84.99 0.00 0.00 8800.34 998.64 13044.78' 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:30.491 03:21:01 
bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:26:30.491 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:26:30.491 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:26:30.491 03:21:01 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:26:30.492 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:26:30.492 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:26:30.492 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:26:30.492 03:21:01 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:26:33.774 03:21:04 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-05-15 03:21:01.515300] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:26:33.774 [2024-05-15 03:21:01.515360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid30076 ] 00:26:33.774 Using job config with 4 jobs 00:26:33.774 [2024-05-15 03:21:01.630672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.774 [2024-05-15 03:21:01.745805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.774 cpumask for '\''job0'\'' is too big 00:26:33.774 cpumask for '\''job1'\'' is too big 00:26:33.774 cpumask for '\''job2'\'' is too big 00:26:33.774 cpumask for '\''job3'\'' is too big 00:26:33.774 Running I/O for 2 seconds... 00:26:33.774 00:26:33.774 Latency(us) 00:26:33.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.774 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.774 Malloc0 : 2.05 10630.76 10.38 0.00 0.00 24074.52 4306.65 37199.48 00:26:33.774 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.774 Malloc1 : 2.05 10620.03 10.37 0.00 0.00 24073.29 5242.88 37199.48 00:26:33.774 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.774 Malloc0 : 2.05 10609.63 10.36 0.00 0.00 24001.30 4244.24 32955.25 00:26:33.774 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.774 Malloc1 : 2.05 10598.98 10.35 0.00 0.00 24001.48 5211.67 32955.25 00:26:33.774 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.774 Malloc0 : 2.06 10588.61 10.34 0.00 0.00 23928.55 4244.24 28586.18 00:26:33.774 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.774 Malloc1 : 2.06 10578.03 10.33 0.00 0.00 23927.69 5180.46 28586.18 00:26:33.774 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.774 Malloc0 : 2.06 10567.67 10.32 0.00 0.00 23854.17 4244.24 24591.60 00:26:33.774 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.774 Malloc1 : 2.06 10557.08 10.31 0.00 0.00 23852.73 5211.67 24591.60 00:26:33.774 =================================================================================================================== 00:26:33.774 Total : 84750.79 82.76 0.00 0.00 23964.21 4244.24 37199.48' 00:26:33.774 03:21:04 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-05-15 03:21:01.515300] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:33.774 [2024-05-15 03:21:01.515360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid30076 ] 00:26:33.774 Using job config with 4 jobs 00:26:33.774 [2024-05-15 03:21:01.630672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.774 [2024-05-15 03:21:01.745805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.774 cpumask for '\''job0'\'' is too big 00:26:33.774 cpumask for '\''job1'\'' is too big 00:26:33.774 cpumask for '\''job2'\'' is too big 00:26:33.774 cpumask for '\''job3'\'' is too big 00:26:33.774 Running I/O for 2 seconds... 
00:26:33.774 00:26:33.774 Latency(us) 00:26:33.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.774 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.775 Malloc0 : 2.05 10630.76 10.38 0.00 0.00 24074.52 4306.65 37199.48 00:26:33.775 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.775 Malloc1 : 2.05 10620.03 10.37 0.00 0.00 24073.29 5242.88 37199.48 00:26:33.775 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.775 Malloc0 : 2.05 10609.63 10.36 0.00 0.00 24001.30 4244.24 32955.25 00:26:33.775 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.775 Malloc1 : 2.05 10598.98 10.35 0.00 0.00 24001.48 5211.67 32955.25 00:26:33.775 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.775 Malloc0 : 2.06 10588.61 10.34 0.00 0.00 23928.55 4244.24 28586.18 00:26:33.775 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.775 Malloc1 : 2.06 10578.03 10.33 0.00 0.00 23927.69 5180.46 28586.18 00:26:33.775 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.775 Malloc0 : 2.06 10567.67 10.32 0.00 0.00 23854.17 4244.24 24591.60 00:26:33.775 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:33.775 Malloc1 : 2.06 10557.08 10.31 0.00 0.00 23852.73 5211.67 24591.60 00:26:33.775 =================================================================================================================== 00:26:33.775 Total : 84750.79 82.76 0.00 0.00 23964.21 4244.24 37199.48' 00:26:33.775 03:21:04 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:33.775 03:21:04 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-15 03:21:01.515300] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:33.775 [2024-05-15 03:21:01.515360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid30076 ] 00:26:33.775 Using job config with 4 jobs 00:26:33.775 [2024-05-15 03:21:01.630672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.775 [2024-05-15 03:21:01.745805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.775 cpumask for '\''job0'\'' is too big 00:26:33.775 cpumask for '\''job1'\'' is too big 00:26:33.775 cpumask for '\''job2'\'' is too big 00:26:33.775 cpumask for '\''job3'\'' is too big 00:26:33.775 Running I/O for 2 seconds... 
00:26:33.775
00:26:33.775 Latency(us)
00:26:33.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:33.775 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:33.775 Malloc0 : 2.05 10630.76 10.38 0.00 0.00 24074.52 4306.65 37199.48
00:26:33.775 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:33.775 Malloc1 : 2.05 10620.03 10.37 0.00 0.00 24073.29 5242.88 37199.48
00:26:33.775 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:33.775 Malloc0 : 2.05 10609.63 10.36 0.00 0.00 24001.30 4244.24 32955.25
00:26:33.775 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:33.775 Malloc1 : 2.05 10598.98 10.35 0.00 0.00 24001.48 5211.67 32955.25
00:26:33.775 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:33.775 Malloc0 : 2.06 10588.61 10.34 0.00 0.00 23928.55 4244.24 28586.18
00:26:33.775 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:33.775 Malloc1 : 2.06 10578.03 10.33 0.00 0.00 23927.69 5180.46 28586.18
00:26:33.775 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:33.775 Malloc0 : 2.06 10567.67 10.32 0.00 0.00 23854.17 4244.24 24591.60
00:26:33.775 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:33.775 Malloc1 : 2.06 10557.08 10.31 0.00 0.00 23852.73 5211.67 24591.60
00:26:33.775 ===================================================================================================================
00:26:33.775 Total : 84750.79 82.76 0.00 0.00 23964.21 4244.24 37199.48'
00:26:33.775 03:21:04 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:26:33.775 03:21:04 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:26:33.775 03:21:04 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]]
00:26:33.775 03:21:04 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup
00:26:33.775 03:21:04 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
00:26:33.775 03:21:04 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:26:33.775
00:26:33.775 real 0m11.057s
00:26:33.775 user 0m9.919s
00:26:33.775 sys 0m0.966s
00:26:33.775 03:21:04 bdevperf_config -- common/autotest_common.sh@1122 -- # xtrace_disable
00:26:33.775 03:21:04 bdevperf_config -- common/autotest_common.sh@10 -- # set +x
00:26:33.775 ************************************
00:26:33.775 END TEST bdevperf_config
00:26:33.775 ************************************
00:26:33.775 03:21:04 -- spdk/autotest.sh@188 -- # uname -s
00:26:33.775 03:21:04 -- spdk/autotest.sh@188 -- # [[ Linux == Linux ]]
00:26:33.775 03:21:04 -- spdk/autotest.sh@189 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh
00:26:33.775 03:21:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:26:33.775 03:21:04 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:26:33.775 03:21:04 -- common/autotest_common.sh@10 -- # set +x
00:26:33.775 ************************************
00:26:33.775 START TEST reactor_set_interrupt
00:26:33.775 ************************************
00:26:33.775 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh
00:26:33.775 * Looking for test storage...
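
Before the interrupt tests start, note that the whole bdevperf_config section above runs through two small helpers in test/bdev/bdevperf/common.sh: create_job appends INI-style sections to test.conf, and get_num_jobs recovers the job count from bdevperf's captured output. A minimal sketch reconstructed from the xtrace tags (common.sh@8-@20 and @32); the heredoc bodies behind the traced cat calls are not visible in the log, so the exact per-job keys written here are an assumption:

testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf

create_job() {
        local job_section=$1   # @8: "global" or "jobN"
        local rw=$2            # @9: optional workload (read/write/rw)
        local filename=$3      # @10: optional target bdev, e.g. Malloc0

        # (@12/@13: when job_section is "global", shared defaults are cat'ed
        # in first; that body never appears in the trace, so it is omitted)
        job="[$job_section]"   # @18: note that job is deliberately not local
        {
                echo "$job"    # @19: section header
                if [[ -n $rw ]]; then echo "rw=$rw"; fi                   # assumed key name
                if [[ -n $filename ]]; then echo "filename=$filename"; fi # assumed key name
        } >> "$testconf"       # @20: appended to test.conf
}

get_num_jobs() {
        # @32: pull N out of bdevperf's "Using job config with N jobs" banner
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

Under this sketch, create_job job0 write Malloc0 emits a [job0] section with rw=write and filename=Malloc0, and the [[ 4 == \4 ]] at test_config.sh@43 above is simply get_num_jobs agreeing with the four sections that were created.
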
00:26:33.775 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:33.775 03:21:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:26:33.775 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:33.775 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:33.775 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:33.775 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:26:33.775 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:33.775 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:26:33.775 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:33.775 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:26:33.775 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:33.775 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:33.775 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:26:33.775 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:26:33.775 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:33.775 
03:21:04 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:26:33.775 03:21:04 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:26:33.776 03:21:04 
reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:26:33.776 03:21:04 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:26:33.776 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 
00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:33.776 #define SPDK_CONFIG_H 00:26:33.776 #define SPDK_CONFIG_APPS 1 00:26:33.776 #define SPDK_CONFIG_ARCH native 00:26:33.776 #undef SPDK_CONFIG_ASAN 00:26:33.776 #undef SPDK_CONFIG_AVAHI 00:26:33.776 #undef SPDK_CONFIG_CET 00:26:33.776 #define SPDK_CONFIG_COVERAGE 1 00:26:33.776 #define SPDK_CONFIG_CROSS_PREFIX 00:26:33.776 #define SPDK_CONFIG_CRYPTO 1 00:26:33.776 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:26:33.776 #undef SPDK_CONFIG_CUSTOMOCF 00:26:33.776 #undef SPDK_CONFIG_DAOS 00:26:33.776 #define SPDK_CONFIG_DAOS_DIR 00:26:33.776 #define SPDK_CONFIG_DEBUG 1 00:26:33.776 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:26:33.776 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:26:33.776 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:33.776 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:33.776 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:33.776 #undef SPDK_CONFIG_DPDK_UADK 00:26:33.776 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:26:33.776 #define SPDK_CONFIG_EXAMPLES 1 00:26:33.776 #undef SPDK_CONFIG_FC 00:26:33.776 #define SPDK_CONFIG_FC_PATH 00:26:33.776 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:33.776 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:33.776 #undef SPDK_CONFIG_FUSE 00:26:33.776 #undef SPDK_CONFIG_FUZZER 00:26:33.776 #define SPDK_CONFIG_FUZZER_LIB 00:26:33.776 #undef SPDK_CONFIG_GOLANG 00:26:33.776 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:26:33.776 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:26:33.776 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:33.776 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:26:33.776 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:33.776 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:33.776 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 
1 00:26:33.776 #define SPDK_CONFIG_IDXD 1 00:26:33.776 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:33.776 #define SPDK_CONFIG_IPSEC_MB 1 00:26:33.776 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:26:33.776 #define SPDK_CONFIG_ISAL 1 00:26:33.776 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:33.776 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:33.776 #define SPDK_CONFIG_LIBDIR 00:26:33.776 #undef SPDK_CONFIG_LTO 00:26:33.776 #define SPDK_CONFIG_MAX_LCORES 00:26:33.776 #define SPDK_CONFIG_NVME_CUSE 1 00:26:33.776 #undef SPDK_CONFIG_OCF 00:26:33.776 #define SPDK_CONFIG_OCF_PATH 00:26:33.776 #define SPDK_CONFIG_OPENSSL_PATH 00:26:33.776 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:33.776 #define SPDK_CONFIG_PGO_DIR 00:26:33.776 #undef SPDK_CONFIG_PGO_USE 00:26:33.776 #define SPDK_CONFIG_PREFIX /usr/local 00:26:33.776 #undef SPDK_CONFIG_RAID5F 00:26:33.776 #undef SPDK_CONFIG_RBD 00:26:33.776 #define SPDK_CONFIG_RDMA 1 00:26:33.776 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:33.776 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:33.776 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:33.776 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:33.776 #define SPDK_CONFIG_SHARED 1 00:26:33.776 #undef SPDK_CONFIG_SMA 00:26:33.776 #define SPDK_CONFIG_TESTS 1 00:26:33.776 #undef SPDK_CONFIG_TSAN 00:26:33.776 #define SPDK_CONFIG_UBLK 1 00:26:33.776 #define SPDK_CONFIG_UBSAN 1 00:26:33.776 #undef SPDK_CONFIG_UNIT_TESTS 00:26:33.776 #undef SPDK_CONFIG_URING 00:26:33.776 #define SPDK_CONFIG_URING_PATH 00:26:33.776 #undef SPDK_CONFIG_URING_ZNS 00:26:33.776 #undef SPDK_CONFIG_USDT 00:26:33.776 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:26:33.776 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:26:33.776 #undef SPDK_CONFIG_VFIO_USER 00:26:33.776 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:33.776 #define SPDK_CONFIG_VHOST 1 00:26:33.776 #define SPDK_CONFIG_VIRTIO 1 00:26:33.776 #undef SPDK_CONFIG_VTUNE 00:26:33.776 #define SPDK_CONFIG_VTUNE_DIR 00:26:33.776 #define SPDK_CONFIG_WERROR 1 00:26:33.776 #define SPDK_CONFIG_WPDK_DIR 00:26:33.776 #undef SPDK_CONFIG_XNVME 00:26:33.776 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:33.776 03:21:04 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:33.776 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:26:33.776 03:21:04 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.776 03:21:04 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.776 03:21:04 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.777 03:21:04 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.777 03:21:04 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.777 03:21:04 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.777 03:21:04 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:26:33.777 03:21:04 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:26:33.777 03:21:04 reactor_set_interrupt -- 
pm/common@76 -- # SUDO[1]='sudo -E' 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:26:33.777 03:21:04 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@57 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@61 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@63 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@65 -- # : 1 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@67 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@69 -- # : 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@71 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@73 -- # : 1 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@75 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@77 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@79 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@81 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@83 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@85 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@87 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@89 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@91 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@93 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@95 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@97 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@99 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@101 -- # : rdma 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@103 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@105 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@107 -- # : 1 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@109 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@111 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@113 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@115 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@117 -- # : 1 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@119 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@121 -- # : 1 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:26:33.777 03:21:04 reactor_set_interrupt -- 
common/autotest_common.sh@123 -- # : 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@125 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@127 -- # : 1 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@129 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@131 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@133 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@135 -- # : 0 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@137 -- # : 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@139 -- # : true 00:26:33.777 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@141 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@143 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@145 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@147 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@149 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@151 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@153 -- # : 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@155 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@157 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@159 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- 
common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@161 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@163 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@168 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@170 -- # : 0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export 
PCI_BLOCK_SYNC_ON_RESET=yes 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@199 -- # cat 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 
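The block above wires up the sanitizer environment for the run: ASAN_OPTIONS, UBSAN_OPTIONS, and an LSAN suppression file that whitelists a known libfuse3 leak. (Each export shows up twice with the same script line number because bash xtrace prints "export VAR=value" as two steps, the export builtin and the assignment.) A minimal sketch of that wiring, using the exact option strings from the records above; the file-assembly order is an assumption, not the verbatim autotest_common.sh code:

    supp=/var/tmp/asan_suppression_file   # path taken from the records above
    rm -f "$supp"
    echo "leak:libfuse3.so" >> "$supp"    # suppress a known leak in third-party libfuse
    export LSAN_OPTIONS="suppressions=$supp"
    export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
    export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"

exitcode=134 matches the SIGABRT convention (128+6), so a UBSAN halt looks like an ordinary abort to the CI harness.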
00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@262 -- # export valgrind= 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@262 -- # valgrind= 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@268 -- # uname -s 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@271 -- # [[ 1 -eq 1 ]] 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@275 -- # export HUGE_EVEN_ALLOC=yes 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@275 -- # HUGE_EVEN_ALLOC=yes 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@278 -- # MAKE=make 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j96 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@298 -- # TEST_MODE= 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@317 -- # [[ -z 30597 ]] 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@317 -- # kill -0 30597 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:26:33.778 03:21:04 reactor_set_interrupt -- 
common/autotest_common.sh@330 -- # local mount target_dir 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:26:33.778 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.unjcuk 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.unjcuk/tests/interrupt /tmp/spdk.unjcuk 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@326 -- # df -T 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=918523904 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=4365905920 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=84103168000 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=95562719232 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=11459551232 00:26:33.779 03:21:04 reactor_set_interrupt -- 
common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=47776555008 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=47781359616 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=4804608 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=19102871552 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=19112546304 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=9674752 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=47780581376 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=47781359616 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=778240 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=9556267008 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=9556271104 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:26:33.779 * Looking for test storage... 
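The df -T loop above is set_test_storage probing for a filesystem with at least requested_size (2 GiB plus slack, 2214592512 bytes here) free before pointing SPDK_TEST_STORAGE at it. A condensed sketch of the parse, assuming the standard df -T column order (Filesystem, Type, 1K-blocks, Used, Available, Use%, Mounted on); the *1024 scaling is inferred from the byte-sized values in the log:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      sizes["$mount"]=$((size * 1024))     # df -T reports 1K blocks; scale to bytes
      uses["$mount"]=$((use * 1024))
      avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
    echo "available on /: ${avails[/]} bytes"   # 84103168000 in this run

Here the overlay root wins with 84103168000 bytes available, comfortably above the request.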
00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@367 -- # local target_space new_size 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@371 -- # mount=/ 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@373 -- # target_space=84103168000 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@380 -- # new_size=13674143744 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:33.779 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@388 -- # return 0 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@1678 -- # set -o errtrace 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # true 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@1685 -- # xtrace_fd 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:33.779 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:26:33.779 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:26:33.779 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:33.779 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:26:33.779 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:26:33.779 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:26:33.779 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:26:33.779 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:33.779 03:21:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:26:33.780 03:21:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:26:33.780 03:21:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:26:33.780 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.780 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:26:33.780 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=30664 00:26:33.780 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:33.780 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 30664 /var/tmp/spdk.sock 00:26:33.780 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@827 -- # '[' -z 30664 ']' 00:26:33.780 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.780 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:33.780 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
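start_intr_tgt launches the interrupt_tgt example pinned to cores 0-2 (-m 0x07) and then blocks in waitforlisten until the RPC socket answers. A hedged sketch of that polling pattern; the retry budget and the probe RPC are illustrative, not the verbatim helper:

    rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py

    waitforlisten() {                     # usage: waitforlisten <pid> [rpc_addr]
      local pid=$1 addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do     # retry budget is illustrative
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited before listening" >&2; return 1; }
        "$rpc_py" -s "$addr" rpc_get_methods &>/dev/null && return 0   # socket is up
        sleep 0.1
      done
      return 1
    }

Polling a real RPC (rpc_get_methods) instead of sleeping a fixed interval keeps the test fast on idle machines and robust on loaded ones.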
00:26:33.780 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:33.780 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:33.780 03:21:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:33.780 [2024-05-15 03:21:04.515008] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:33.780 [2024-05-15 03:21:04.515065] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid30664 ] 00:26:33.780 [2024-05-15 03:21:04.614266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:33.780 [2024-05-15 03:21:04.709448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.780 [2024-05-15 03:21:04.709552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.780 [2024-05-15 03:21:04.709556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.780 [2024-05-15 03:21:04.780881] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:33.780 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:33.780 03:21:04 reactor_set_interrupt -- common/autotest_common.sh@860 -- # return 0 00:26:33.780 03:21:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:26:33.780 03:21:04 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:34.038 Malloc0 00:26:34.038 Malloc1 00:26:34.038 Malloc2 00:26:34.038 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:26:34.038 03:21:05 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:26:34.038 03:21:05 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:34.038 03:21:05 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:34.038 5000+0 records in 00:26:34.038 5000+0 records out 00:26:34.038 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0163783 s, 625 MB/s 00:26:34.038 03:21:05 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:26:34.296 AIO0 00:26:34.296 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 30664 00:26:34.296 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 30664 without_thd 00:26:34.296 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=30664 00:26:34.296 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:26:34.296 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:34.296 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:34.296 03:21:05 reactor_set_interrupt -- interrupt/common.sh@55 -- # local 
reactor_cpumask=0x1 00:26:34.296 03:21:05 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:26:34.296 03:21:05 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:26:34.296 03:21:05 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:34.296 03:21:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:26:34.296 03:21:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:34.555 03:21:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:26:34.555 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:34.555 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:26:34.555 03:21:05 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:26:34.555 03:21:05 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:26:34.556 03:21:05 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:26:34.556 03:21:05 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:34.556 03:21:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:26:34.556 03:21:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:34.814 spdk_thread ids are 1 on reactor0. 
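reactor_get_thread_ids, exercised twice above, maps a reactor's CPU mask to the SPDK thread ids scheduled on it: thread_get_stats returns the thread list as JSON and jq selects the entries whose cpumask matches. The mask is normalized through arithmetic expansion (0x1 becomes 1) so the string comparison inside jq lines up with what thread_get_stats reports. Reconstructed from the exact commands in the log:

    rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    reactor_cpumask=$((0x1))   # reactor 0; arithmetic expansion drops the 0x prefix
    "$rpc_py" thread_get_stats \
      | jq --arg reactor_cpumask "$reactor_cpumask" \
           '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'

For mask 0x1 this printed 1 (app_thread); for 0x4 it came back empty, so no spdk_thread was bound to reactor 2 at that moment.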
00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 30664 0 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 30664 0 idle 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=30664 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 30664 -w 256 00:26:34.814 03:21:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 30664 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.34 reactor_0' 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 30664 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.34 reactor_0 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 30664 1 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 30664 1 idle 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=30664 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 30664 -w 256 00:26:35.073 03:21:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:26:35.332 03:21:06 
reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 30710 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.00 reactor_1' 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 30710 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.00 reactor_1 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 30664 2 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 30664 2 idle 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=30664 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 30664 -w 256 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 30711 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.00 reactor_2' 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 30711 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.00 reactor_2 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:26:35.332 03:21:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:26:35.332 03:21:06 reactor_set_interrupt 
-- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:26:35.591 [2024-05-15 03:21:06.670303] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:35.591 03:21:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:35.850 [2024-05-15 03:21:06.922044] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:35.850 [2024-05-15 03:21:06.922282] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:35.850 03:21:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:36.107 [2024-05-15 03:21:07.174030] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:36.107 [2024-05-15 03:21:07.174211] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:36.107 03:21:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:36.107 03:21:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 30664 0 00:26:36.107 03:21:07 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 30664 0 busy 00:26:36.107 03:21:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=30664 00:26:36.107 03:21:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:36.107 03:21:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:26:36.107 03:21:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:26:36.107 03:21:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:36.107 03:21:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:36.107 03:21:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:36.107 03:21:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 30664 -w 256 00:26:36.107 03:21:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 30664 root 20 0 128.2g 36096 23040 R 99.9 0.0 0:00.78 reactor_0' 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 30664 root 20 0 128.2g 36096 23040 R 99.9 0.0 0:00.78 reactor_0 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- 
# for i in 0 2 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 30664 2 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 30664 2 busy 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=30664 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 30664 -w 256 00:26:36.368 03:21:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 30711 root 20 0 128.2g 36096 23040 R 99.9 0.0 0:00.36 reactor_2' 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 30711 root 20 0 128.2g 36096 23040 R 99.9 0.0 0:00.36 reactor_2 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:36.663 [2024-05-15 03:21:07.706025] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
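Every reactor_is_busy / reactor_is_idle assertion in this test funnels into the same probe: one batch iteration of top restricted to the target pid, a grep for the reactor_N thread row, and a threshold on the %CPU column (field 9). A sketch of that probe with the thresholds the log applies (busy means not below 70%, idle means not above 30%); not the verbatim interrupt/common.sh:

    reactor_is_busy_or_idle() {           # usage: reactor_is_busy_or_idle <pid> <idx> busy|idle
      local pid=$1 idx=$2 state=$3 top_reactor cpu_rate
      top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
      cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
      cpu_rate=${cpu_rate%.*}             # "99.9" -> 99, "0.0" -> 0
      if [[ $state == busy ]]; then
        [[ $cpu_rate -ge 70 ]]            # a poll-mode reactor should pin its core
      else
        [[ $cpu_rate -le 30 ]]            # an interrupt-mode reactor should sit near 0%
      fi
    }

The 99.9% readings after the -d RPCs confirm that reactors 0 and 2 really dropped back to busy polling.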
00:26:36.663 [2024-05-15 03:21:07.706131] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 30664 2 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 30664 2 idle 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=30664 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 30664 -w 256 00:26:36.663 03:21:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:26:36.922 03:21:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 30711 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.53 reactor_2' 00:26:36.922 03:21:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 30711 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.53 reactor_2 00:26:36.922 03:21:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:36.922 03:21:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:36.922 03:21:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:26:36.922 03:21:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:26:36.922 03:21:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:26:36.922 03:21:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:26:36.922 03:21:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:26:36.922 03:21:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:36.922 03:21:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:37.180 [2024-05-15 03:21:08.130026] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:37.180 [2024-05-15 03:21:08.130160] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:37.180 03:21:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:26:37.180 03:21:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:26:37.180 03:21:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:26:37.439 [2024-05-15 03:21:08.382376] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
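The mode flip itself is a single plugin RPC, visible verbatim in the records above: reactor_set_interrupt_mode <reactor_id> with -d switches the reactor to poll mode, and the same call without -d returns it to interrupt mode; the plugin lives in examples/interrupt_tgt, which is why PYTHONPATH was extended with that directory earlier. The app_thread is parked on reactor 1 first (thread_set_cpumask -m 0x2) and restored afterwards (-m 0x1), presumably so no pinned thread sits on a reactor while its mode flips. The sequence, lifted from the log:

    rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    "$rpc_py" thread_set_cpumask -i 1 -m 0x2                              # move app_thread off reactor 0
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # reactor 0 -> poll mode
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # reactor 2 -> poll mode
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2      # reactor 2 -> interrupt mode
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0      # reactor 0 -> interrupt mode
    "$rpc_py" thread_set_cpumask -i 1 -m 0x1                              # restore app_thread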
00:26:37.439 03:21:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 30664 0 00:26:37.439 03:21:08 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 30664 0 idle 00:26:37.439 03:21:08 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=30664 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 30664 -w 256 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 30664 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:01.54 reactor_0' 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 30664 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:01.54 reactor_0 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:26:37.440 03:21:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 30664 00:26:37.440 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@946 -- # '[' -z 30664 ']' 00:26:37.440 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@950 -- # kill -0 30664 00:26:37.440 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@951 -- # uname 00:26:37.440 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:37.440 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 30664 00:26:37.699 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:37.699 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:37.699 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 30664' 00:26:37.699 killing process with pid 30664 00:26:37.699 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@965 -- # kill 
30664 00:26:37.699 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@970 -- # wait 30664 00:26:37.957 03:21:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:26:37.957 03:21:08 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:26:37.957 03:21:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:26:37.957 03:21:08 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.957 03:21:08 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:26:37.957 03:21:08 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=31927 00:26:37.957 03:21:08 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:37.957 03:21:08 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:37.957 03:21:08 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 31927 /var/tmp/spdk.sock 00:26:37.957 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@827 -- # '[' -z 31927 ']' 00:26:37.957 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.957 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:37.957 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.957 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:37.957 03:21:08 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:37.957 [2024-05-15 03:21:08.917315] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:37.957 [2024-05-15 03:21:08.917372] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid31927 ] 00:26:37.957 [2024-05-15 03:21:09.016086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:37.957 [2024-05-15 03:21:09.107787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.957 [2024-05-15 03:21:09.107884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.957 [2024-05-15 03:21:09.107890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.216 [2024-05-15 03:21:09.179004] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
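start_intr_tgt above relaunches build/examples/interrupt_tgt on cpumask 0x07 and then parks in waitforlisten until the target's RPC socket answers. A minimal sketch of that wait loop, simplified from the common/autotest_common.sh helpers traced here (max_retries=100 matches the trace; the rpc_get_methods probe and the 0.1 s sleep are assumptions):

    # Poll until the UNIX-domain RPC socket accepts requests, or give up.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died while starting
            # Any cheap RPC serves as a liveness probe.
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                      # never came up
    }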
00:26:38.781 03:21:09 reactor_set_interrupt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:38.781 03:21:09 reactor_set_interrupt -- common/autotest_common.sh@860 -- # return 0 00:26:38.781 03:21:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:26:38.781 03:21:09 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:39.039 Malloc0 00:26:39.039 Malloc1 00:26:39.039 Malloc2 00:26:39.039 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:26:39.039 03:21:10 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:26:39.039 03:21:10 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:39.039 03:21:10 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:39.039 5000+0 records in 00:26:39.039 5000+0 records out 00:26:39.039 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0174563 s, 587 MB/s 00:26:39.039 03:21:10 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:26:39.297 AIO0 00:26:39.297 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 31927 00:26:39.297 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 31927 00:26:39.297 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=31927 00:26:39.297 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:26:39.297 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:39.297 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:39.297 03:21:10 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:26:39.297 03:21:10 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:26:39.297 03:21:10 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:26:39.297 03:21:10 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:39.297 03:21:10 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:26:39.297 03:21:10 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:39.555 03:21:10 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:26:39.555 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:39.555 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:26:39.555 03:21:10 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:26:39.555 03:21:10 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:26:39.555 03:21:10 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:26:39.555 03:21:10 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == 
$reactor_cpumask)|.id' 00:26:39.555 03:21:10 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:26:39.555 03:21:10 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:39.814 spdk_thread ids are 1 on reactor0. 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 31927 0 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 31927 0 idle 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=31927 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 31927 -w 256 00:26:39.814 03:21:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 31927 root 20 0 128.2g 36096 23040 S 6.7 0.0 0:00.34 reactor_0' 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 31927 root 20 0 128.2g 36096 23040 S 6.7 0.0 0:00.34 reactor_0 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 31927 1 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 31927 1 idle 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=31927 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:40.073 03:21:11 reactor_set_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 31927 -w 256 00:26:40.073 03:21:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 31982 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.00 reactor_1' 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 31982 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.00 reactor_1 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 31927 2 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 31927 2 idle 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=31927 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 31927 -w 256 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 31983 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.00 reactor_2' 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 31983 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.00 reactor_2 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:26:40.332 
03:21:11 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:26:40.332 03:21:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:40.591 [2024-05-15 03:21:11.604531] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:40.591 [2024-05-15 03:21:11.604685] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:26:40.591 [2024-05-15 03:21:11.604751] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:40.591 03:21:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:40.850 [2024-05-15 03:21:11.780916] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:40.850 [2024-05-15 03:21:11.781011] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 31927 0 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 31927 0 busy 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=31927 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 31927 -w 256 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 31927 root 20 0 128.2g 36096 23040 R 99.9 0.0 0:00.70 reactor_0' 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 31927 root 20 0 128.2g 36096 23040 R 99.9 0.0 0:00.70 reactor_0 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:26:40.850 03:21:11 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 31927 2 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 31927 2 busy 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=31927 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 31927 -w 256 00:26:40.850 03:21:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:26:41.110 03:21:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 31983 root 20 0 128.2g 36096 23040 R 99.9 0.0 0:00.35 reactor_2' 00:26:41.110 03:21:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 31983 root 20 0 128.2g 36096 23040 R 99.9 0.0 0:00.35 reactor_2 00:26:41.110 03:21:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:41.110 03:21:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:41.110 03:21:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:26:41.110 03:21:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:26:41.110 03:21:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:26:41.110 03:21:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:26:41.110 03:21:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:26:41.110 03:21:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:41.110 03:21:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:41.369 [2024-05-15 03:21:12.390677] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
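Every reactor_is_busy / reactor_is_idle call above reduces to the same probe: take one batch iteration of top in threads mode, pull the reactor thread's %CPU from column 9, truncate it to an integer, and compare it against a threshold (>= 70 counts as busy, <= 30 as idle, matching the [[ 99 -lt 70 ]] and [[ 0 -gt 30 ]] tests in the trace). A condensed sketch of interrupt/common.sh's reactor_is_busy_or_idle, with the argument checks and retry loop omitted:

    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3 row cpu
        # Same flags as the trace: batch mode, threads view, one iteration, wide output.
        row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')   # %CPU column
        cpu=${cpu%%.*}     # 99.9 -> 99, 6.7 -> 6, as in the cpu_rate lines above
        if [[ $state == busy ]]; then
            (( cpu >= 70 ))    # a polling reactor should sit near 100% CPU
        else
            (( cpu <= 30 ))    # an interrupt-mode reactor should be nearly quiet
        fi
    }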
00:26:41.369 [2024-05-15 03:21:12.390764] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 31927 2 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 31927 2 idle 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=31927 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 31927 -w 256 00:26:41.369 03:21:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:26:41.628 03:21:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 31983 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.60 reactor_2' 00:26:41.628 03:21:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 31983 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:00.60 reactor_2 00:26:41.628 03:21:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:41.628 03:21:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:41.628 03:21:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:26:41.628 03:21:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:26:41.628 03:21:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:26:41.628 03:21:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:26:41.628 03:21:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:26:41.628 03:21:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:41.628 03:21:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:41.887 [2024-05-15 03:21:12.823797] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:41.887 [2024-05-15 03:21:12.823921] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
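Each mode flip in this test is a single RPC against the running target through the interrupt_plugin; the exact shape of the calls, as traced above (the workspace path is folded into a variable here for readability):

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # -d: drop reactor 0 to poll mode
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0      # no flag: back to interrupt mode

The target acknowledges each switch with the "RPC Start" / "complete reactor switch" notice pair seen throughout the trace.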
00:26:41.887 [2024-05-15 03:21:12.823938] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 31927 0 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 31927 0 idle 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=31927 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 31927 -w 256 00:26:41.887 03:21:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 31927 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:01.56 reactor_0' 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 31927 root 20 0 128.2g 36096 23040 S 0.0 0.0 0:01.56 reactor_0 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:41.887 03:21:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 31927 00:26:41.887 03:21:13 reactor_set_interrupt -- common/autotest_common.sh@946 -- # '[' -z 31927 ']' 00:26:41.887 03:21:13 reactor_set_interrupt -- common/autotest_common.sh@950 -- # kill -0 31927 00:26:41.887 03:21:13 reactor_set_interrupt -- common/autotest_common.sh@951 -- # uname 00:26:41.887 03:21:13 reactor_set_interrupt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:41.887 03:21:13 reactor_set_interrupt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 31927 00:26:42.146 03:21:13 reactor_set_interrupt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:42.146 03:21:13 reactor_set_interrupt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
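The teardown entered here is killprocess 31927; the checks traced at autotest_common.sh@946-@970 (finishing just below) reduce to: require a live pid, refuse to signal a sudo wrapper, then kill and reap. A simplified sketch:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                    # the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
            # The sudo branch is not exercised in this trace; bailing out is a stand-in.
            [[ $name == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap so the socket frees up
    }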
00:26:42.146 03:21:13 reactor_set_interrupt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 31927' 00:26:42.146 killing process with pid 31927 00:26:42.146 03:21:13 reactor_set_interrupt -- common/autotest_common.sh@965 -- # kill 31927 00:26:42.146 03:21:13 reactor_set_interrupt -- common/autotest_common.sh@970 -- # wait 31927 00:26:42.407 03:21:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:26:42.407 03:21:13 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:26:42.407 00:26:42.407 real 0m9.053s 00:26:42.407 user 0m9.025s 00:26:42.407 sys 0m1.696s 00:26:42.407 03:21:13 reactor_set_interrupt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:42.407 03:21:13 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:42.407 ************************************ 00:26:42.407 END TEST reactor_set_interrupt 00:26:42.407 ************************************ 00:26:42.407 03:21:13 -- spdk/autotest.sh@190 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:42.407 03:21:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:42.407 03:21:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:42.407 03:21:13 -- common/autotest_common.sh@10 -- # set +x 00:26:42.407 ************************************ 00:26:42.407 START TEST reap_unregistered_poller 00:26:42.407 ************************************ 00:26:42.407 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:42.407 * Looking for test storage... 00:26:42.407 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:42.407 03:21:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:26:42.407 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:42.407 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:42.407 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:42.407 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
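Before the next test body runs, interrupt_common.sh resolves its directories; the dirname/readlink dance traced here (rootdir lands just below) is equivalent to the usual SPDK pattern:

    testdir=$(readlink -f "$(dirname "$0")")     # .../spdk/test/interrupt
    rootdir=$(readlink -f "$testdir/../..")      # .../spdk
    source "$rootdir/test/common/autotest_common.sh"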
00:26:42.407 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:42.407 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:26:42.407 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:42.407 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:26:42.407 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:42.407 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:42.407 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:26:42.407 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:26:42.407 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 
00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 
00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:26:42.407 03:21:13 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:26:42.407 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:26:42.407 03:21:13 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:26:42.407 03:21:13 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@11 -- # 
_test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:42.408 #define SPDK_CONFIG_H 00:26:42.408 #define SPDK_CONFIG_APPS 1 00:26:42.408 #define SPDK_CONFIG_ARCH native 00:26:42.408 #undef SPDK_CONFIG_ASAN 00:26:42.408 #undef SPDK_CONFIG_AVAHI 00:26:42.408 #undef SPDK_CONFIG_CET 00:26:42.408 #define SPDK_CONFIG_COVERAGE 1 00:26:42.408 #define SPDK_CONFIG_CROSS_PREFIX 00:26:42.408 #define SPDK_CONFIG_CRYPTO 1 00:26:42.408 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:26:42.408 #undef SPDK_CONFIG_CUSTOMOCF 00:26:42.408 #undef SPDK_CONFIG_DAOS 00:26:42.408 #define SPDK_CONFIG_DAOS_DIR 00:26:42.408 #define SPDK_CONFIG_DEBUG 1 00:26:42.408 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:26:42.408 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:26:42.408 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:42.408 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:42.408 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:42.408 #undef SPDK_CONFIG_DPDK_UADK 00:26:42.408 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:26:42.408 #define SPDK_CONFIG_EXAMPLES 1 00:26:42.408 #undef SPDK_CONFIG_FC 00:26:42.408 #define SPDK_CONFIG_FC_PATH 00:26:42.408 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:42.408 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:42.408 #undef SPDK_CONFIG_FUSE 00:26:42.408 #undef SPDK_CONFIG_FUZZER 00:26:42.408 #define SPDK_CONFIG_FUZZER_LIB 00:26:42.408 #undef SPDK_CONFIG_GOLANG 00:26:42.408 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:26:42.408 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:26:42.408 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:42.408 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:26:42.408 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:42.408 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:42.408 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:42.408 #define SPDK_CONFIG_IDXD 1 00:26:42.408 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:42.408 #define SPDK_CONFIG_IPSEC_MB 1 00:26:42.408 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:26:42.408 #define SPDK_CONFIG_ISAL 1 00:26:42.408 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:42.408 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:42.408 #define SPDK_CONFIG_LIBDIR 00:26:42.408 #undef SPDK_CONFIG_LTO 00:26:42.408 #define SPDK_CONFIG_MAX_LCORES 00:26:42.408 #define SPDK_CONFIG_NVME_CUSE 1 00:26:42.408 #undef SPDK_CONFIG_OCF 00:26:42.408 #define 
SPDK_CONFIG_OCF_PATH 00:26:42.408 #define SPDK_CONFIG_OPENSSL_PATH 00:26:42.408 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:42.408 #define SPDK_CONFIG_PGO_DIR 00:26:42.408 #undef SPDK_CONFIG_PGO_USE 00:26:42.408 #define SPDK_CONFIG_PREFIX /usr/local 00:26:42.408 #undef SPDK_CONFIG_RAID5F 00:26:42.408 #undef SPDK_CONFIG_RBD 00:26:42.408 #define SPDK_CONFIG_RDMA 1 00:26:42.408 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:42.408 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:42.408 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:42.408 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:42.408 #define SPDK_CONFIG_SHARED 1 00:26:42.408 #undef SPDK_CONFIG_SMA 00:26:42.408 #define SPDK_CONFIG_TESTS 1 00:26:42.408 #undef SPDK_CONFIG_TSAN 00:26:42.408 #define SPDK_CONFIG_UBLK 1 00:26:42.408 #define SPDK_CONFIG_UBSAN 1 00:26:42.408 #undef SPDK_CONFIG_UNIT_TESTS 00:26:42.408 #undef SPDK_CONFIG_URING 00:26:42.408 #define SPDK_CONFIG_URING_PATH 00:26:42.408 #undef SPDK_CONFIG_URING_ZNS 00:26:42.408 #undef SPDK_CONFIG_USDT 00:26:42.408 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:26:42.408 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:26:42.408 #undef SPDK_CONFIG_VFIO_USER 00:26:42.408 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:42.408 #define SPDK_CONFIG_VHOST 1 00:26:42.408 #define SPDK_CONFIG_VIRTIO 1 00:26:42.408 #undef SPDK_CONFIG_VTUNE 00:26:42.408 #define SPDK_CONFIG_VTUNE_DIR 00:26:42.408 #define SPDK_CONFIG_WERROR 1 00:26:42.408 #define SPDK_CONFIG_WPDK_DIR 00:26:42.408 #undef SPDK_CONFIG_XNVME 00:26:42.408 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:42.408 03:21:13 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:42.408 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:26:42.408 03:21:13 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.408 03:21:13 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.408 03:21:13 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.408 03:21:13 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.408 03:21:13 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.408 03:21:13 reap_unregistered_poller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.408 03:21:13 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:26:42.408 03:21:13 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.408 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:26:42.408 03:21:13 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@81 -- # 
[[ ............................... != QEMU ]] 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:26:42.667 03:21:13 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:26:42.667 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@57 -- # : 0 00:26:42.667 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:26:42.667 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@61 -- # : 0 00:26:42.667 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:42.667 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@63 -- # : 0 00:26:42.667 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:26:42.667 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@65 -- # : 1 00:26:42.667 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:42.667 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@67 -- # : 0 00:26:42.667 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:26:42.667 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@69 -- # : 00:26:42.667 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@71 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@73 -- # : 1 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@75 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@77 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@79 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@81 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@83 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@85 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@87 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@89 -- # : 0 
00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@91 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@93 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@95 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@97 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@99 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@101 -- # : rdma 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@103 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@105 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@107 -- # : 1 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@109 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@111 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@113 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@115 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@117 -- # : 1 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@119 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@121 -- # : 1 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@123 -- # : 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:42.668 03:21:13 reap_unregistered_poller -- 
common/autotest_common.sh@125 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@127 -- # : 1 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@129 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@131 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@133 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@135 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@137 -- # : 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@139 -- # : true 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@141 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@143 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@145 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@147 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@149 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@151 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@153 -- # : 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@155 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@157 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@159 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:26:42.668 03:21:13 
reap_unregistered_poller -- common/autotest_common.sh@161 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@163 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@168 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@170 -- # : 0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 
00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@199 -- # cat 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@262 -- # export valgrind= 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@262 -- # valgrind= 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@268 -- # uname -s 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@271 -- # [[ 1 -eq 1 ]] 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@275 -- # export HUGE_EVEN_ALLOC=yes 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@275 -- # HUGE_EVEN_ALLOC=yes 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@278 -- # MAKE=make 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j96 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@298 -- # TEST_MODE= 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@317 -- # [[ -z 32774 ]] 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@317 -- # kill -0 32774 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@327 -- # [[ -v 
testdir ]] 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local mount target_dir 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.BbQKZb 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.BbQKZb/tests/interrupt /tmp/spdk.BbQKZb 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@326 -- # df -T 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=918523904 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=4365905920 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=84103008256 00:26:42.668 03:21:13 
reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=95562719232 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=11459710976 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=47776555008 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=47781359616 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=4804608 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=19102871552 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=19112546304 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=9674752 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=47780581376 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=47781359616 00:26:42.668 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=778240 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=9556267008 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=9556271104 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:26:42.669 * Looking for test storage... 
00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@367 -- # local target_space new_size 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@371 -- # mount=/ 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@373 -- # target_space=84103008256 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@380 -- # new_size=13674303488 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:42.669 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@388 -- # return 0 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@1678 -- # set -o errtrace 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # true 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@1685 -- # xtrace_fd 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=32826 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:42.669 03:21:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 32826 /var/tmp/spdk.sock 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@827 -- # '[' -z 32826 ']' 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
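(Annotation, not part of the captured output.) The trace above shows the harness launching the example interrupt target on a 3-core mask and then blocking until its UNIX-domain RPC socket answers. A minimal standalone sketch of the same start-and-wait pattern, with paths taken from this workspace; using rpc_get_methods as the readiness probe is an assumption for illustration, not necessarily what waitforlisten calls internally:

#!/usr/bin/env bash
# Sketch only: start interrupt_tgt with the same flags as in the trace above
# and wait until its RPC socket services requests.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
"$SPDK/build/examples/interrupt_tgt" -m 0x07 -r /var/tmp/spdk.sock -E -g &
intr_tgt_pid=$!
# Poll the socket; rpc_get_methods is a cheap read-only query that succeeds
# once the app is up and listening.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
# The test then inspects app_thread's registered pollers, as in the trace below:
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock thread_get_pollers | jq -r '.threads[0].timed_pollers[].name'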
00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:42.669 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:26:42.669 [2024-05-15 03:21:13.708377] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:42.669 [2024-05-15 03:21:13.708432] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid32826 ] 00:26:42.669 [2024-05-15 03:21:13.807173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:42.926 [2024-05-15 03:21:13.904495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.927 [2024-05-15 03:21:13.904591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.927 [2024-05-15 03:21:13.904595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.927 [2024-05-15 03:21:13.974602] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:42.927 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:42.927 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@860 -- # return 0 00:26:42.927 03:21:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:26:42.927 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.927 03:21:13 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:26:42.927 03:21:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:26:42.927 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.927 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:26:42.927 "name": "app_thread", 00:26:42.927 "id": 1, 00:26:42.927 "active_pollers": [], 00:26:42.927 "timed_pollers": [ 00:26:42.927 { 00:26:42.927 "name": "rpc_subsystem_poll_servers", 00:26:42.927 "id": 1, 00:26:42.927 "state": "waiting", 00:26:42.927 "run_count": 0, 00:26:42.927 "busy_count": 0, 00:26:42.927 "period_ticks": 8400000 00:26:42.927 } 00:26:42.927 ], 00:26:42.927 "paused_pollers": [] 00:26:42.927 }' 00:26:42.927 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:26:43.185 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:26:43.185 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:26:43.185 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:26:43.185 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:26:43.185 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:26:43.185 03:21:14 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:26:43.185 03:21:14 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:43.185 03:21:14 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:43.185 
5000+0 records in 00:26:43.185 5000+0 records out 00:26:43.185 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0178954 s, 572 MB/s 00:26:43.186 03:21:14 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:26:43.444 AIO0 00:26:43.444 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:43.703 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:26:43.703 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:26:43.703 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:26:43.703 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.703 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:26:43.703 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.703 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:26:43.703 "name": "app_thread", 00:26:43.703 "id": 1, 00:26:43.703 "active_pollers": [], 00:26:43.703 "timed_pollers": [ 00:26:43.703 { 00:26:43.703 "name": "rpc_subsystem_poll_servers", 00:26:43.703 "id": 1, 00:26:43.703 "state": "waiting", 00:26:43.703 "run_count": 0, 00:26:43.703 "busy_count": 0, 00:26:43.703 "period_ticks": 8400000 00:26:43.703 } 00:26:43.703 ], 00:26:43.703 "paused_pollers": [] 00:26:43.703 }' 00:26:43.703 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:26:43.962 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:26:43.962 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:26:43.962 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:26:43.962 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:26:43.962 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:26:43.962 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:26:43.962 03:21:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 32826 00:26:43.962 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@946 -- # '[' -z 32826 ']' 00:26:43.962 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@950 -- # kill -0 32826 00:26:43.962 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@951 -- # uname 00:26:43.962 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:43.962 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 32826 00:26:43.962 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:43.963 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:43.963 03:21:14 
reap_unregistered_poller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 32826' 00:26:43.963 killing process with pid 32826 00:26:43.963 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@965 -- # kill 32826 00:26:43.963 03:21:14 reap_unregistered_poller -- common/autotest_common.sh@970 -- # wait 32826 00:26:44.222 03:21:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:26:44.222 03:21:15 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:26:44.222 00:26:44.222 real 0m1.767s 00:26:44.222 user 0m1.407s 00:26:44.222 sys 0m0.481s 00:26:44.222 03:21:15 reap_unregistered_poller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:44.222 03:21:15 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:26:44.222 ************************************ 00:26:44.222 END TEST reap_unregistered_poller 00:26:44.222 ************************************ 00:26:44.222 03:21:15 -- spdk/autotest.sh@194 -- # uname -s 00:26:44.222 03:21:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:26:44.222 03:21:15 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:26:44.222 03:21:15 -- spdk/autotest.sh@201 -- # [[ 1 -eq 0 ]] 00:26:44.222 03:21:15 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@256 -- # timing_exit lib 00:26:44.222 03:21:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:44.222 03:21:15 -- common/autotest_common.sh@10 -- # set +x 00:26:44.222 03:21:15 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@275 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@343 -- # '[' 1 -eq 1 ']' 00:26:44.222 03:21:15 -- spdk/autotest.sh@344 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:26:44.222 03:21:15 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:44.222 03:21:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:44.222 03:21:15 -- common/autotest_common.sh@10 -- # set +x 00:26:44.222 ************************************ 00:26:44.222 START TEST compress_compdev 00:26:44.222 ************************************ 00:26:44.222 03:21:15 compress_compdev -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:26:44.481 * Looking for test storage... 
00:26:44.482 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:26:44.482 03:21:15 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:26:44.482 03:21:15 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.482 03:21:15 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.482 03:21:15 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.482 03:21:15 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.482 03:21:15 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.482 03:21:15 compress_compdev -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.482 03:21:15 compress_compdev -- paths/export.sh@5 -- # export PATH 00:26:44.482 03:21:15 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:44.482 03:21:15 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:44.482 03:21:15 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:44.482 03:21:15 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:26:44.482 03:21:15 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:26:44.482 03:21:15 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:26:44.482 03:21:15 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:26:44.482 03:21:15 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=33141 00:26:44.482 03:21:15 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:44.482 03:21:15 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 33141 00:26:44.482 03:21:15 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:26:44.482 03:21:15 compress_compdev -- common/autotest_common.sh@827 -- # '[' -z 33141 ']' 00:26:44.482 03:21:15 compress_compdev -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.482 03:21:15 compress_compdev -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:44.482 03:21:15 compress_compdev -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
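(Annotation, not part of the captured output.) bdevperf is launched here with -z, so it starts idle and is configured over /var/tmp/spdk.sock before the workload is triggered. The create_vols step that follows builds the stack NVMe bdev -> lvstore lvs0 -> thin lvol lv0 -> compress vbdev entirely by RPC. A sketch of those steps using the same RPCs that appear later in this log; piping gen_nvme.sh into load_subsystem_config is an assumption about how create_vols wires the two together:

#!/usr/bin/env bash
# Sketch only: build the volume stack for the compress_compdev run, then start I/O.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"
# Attach the local NVMe controller so Nvme0n1 appears in the waiting bdevperf app.
"$SPDK/scripts/gen_nvme.sh" | "$rpc" load_subsystem_config
# Logical volume store on the NVMe bdev, plus a 100 MiB thin-provisioned volume.
"$rpc" bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
"$rpc" bdev_lvol_create -t -l lvs0 lv0 100
# Compress vbdev (COMP_lvs0/lv0) over the lvol, metadata kept under /tmp/pmem.
"$rpc" bdev_compress_create -b lvs0/lv0 -p /tmp/pmem
# Kick off the configured verify workload (3 s, queue depth 32, 4 KiB I/O).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests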
00:26:44.482 03:21:15 compress_compdev -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:44.482 03:21:15 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:26:44.482 [2024-05-15 03:21:15.492691] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:44.482 [2024-05-15 03:21:15.492747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid33141 ] 00:26:44.482 [2024-05-15 03:21:15.584228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:44.741 [2024-05-15 03:21:15.679046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.741 [2024-05-15 03:21:15.679053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.309 [2024-05-15 03:21:16.230211] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:26:45.309 03:21:16 compress_compdev -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:45.309 03:21:16 compress_compdev -- common/autotest_common.sh@860 -- # return 0 00:26:45.309 03:21:16 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:26:45.309 03:21:16 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:45.309 03:21:16 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:48.595 [2024-05-15 03:21:19.538980] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1452740 PMD being used: compress_qat 00:26:48.595 03:21:19 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:26:48.595 03:21:19 compress_compdev -- common/autotest_common.sh@895 -- # local bdev_name=Nvme0n1 00:26:48.595 03:21:19 compress_compdev -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:48.595 03:21:19 compress_compdev -- common/autotest_common.sh@897 -- # local i 00:26:48.595 03:21:19 compress_compdev -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:48.595 03:21:19 compress_compdev -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:48.595 03:21:19 compress_compdev -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:48.853 03:21:19 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:26:49.112 [ 00:26:49.112 { 00:26:49.112 "name": "Nvme0n1", 00:26:49.112 "aliases": [ 00:26:49.112 "1730783a-8313-4ece-b126-90d76fe66687" 00:26:49.112 ], 00:26:49.112 "product_name": "NVMe disk", 00:26:49.112 "block_size": 512, 00:26:49.112 "num_blocks": 1953525168, 00:26:49.112 "uuid": "1730783a-8313-4ece-b126-90d76fe66687", 00:26:49.112 "assigned_rate_limits": { 00:26:49.112 "rw_ios_per_sec": 0, 00:26:49.112 "rw_mbytes_per_sec": 0, 00:26:49.112 "r_mbytes_per_sec": 0, 00:26:49.112 "w_mbytes_per_sec": 0 00:26:49.112 }, 00:26:49.112 "claimed": false, 00:26:49.112 "zoned": false, 00:26:49.112 "supported_io_types": { 00:26:49.112 "read": true, 00:26:49.112 "write": true, 00:26:49.112 "unmap": true, 00:26:49.112 "write_zeroes": true, 00:26:49.112 "flush": true, 00:26:49.112 "reset": true, 00:26:49.112 "compare": false, 00:26:49.112 "compare_and_write": false, 00:26:49.112 "abort": true, 00:26:49.112 "nvme_admin": true, 00:26:49.112 
"nvme_io": true 00:26:49.112 }, 00:26:49.112 "driver_specific": { 00:26:49.112 "nvme": [ 00:26:49.112 { 00:26:49.112 "pci_address": "0000:5e:00.0", 00:26:49.112 "trid": { 00:26:49.112 "trtype": "PCIe", 00:26:49.112 "traddr": "0000:5e:00.0" 00:26:49.112 }, 00:26:49.112 "ctrlr_data": { 00:26:49.112 "cntlid": 0, 00:26:49.112 "vendor_id": "0x8086", 00:26:49.112 "model_number": "INTEL SSDPE2KX010T8", 00:26:49.112 "serial_number": "BTLJ807001JM1P0FGN", 00:26:49.112 "firmware_revision": "VDV10170", 00:26:49.112 "oacs": { 00:26:49.112 "security": 1, 00:26:49.112 "format": 1, 00:26:49.112 "firmware": 1, 00:26:49.112 "ns_manage": 1 00:26:49.112 }, 00:26:49.112 "multi_ctrlr": false, 00:26:49.112 "ana_reporting": false 00:26:49.112 }, 00:26:49.112 "vs": { 00:26:49.112 "nvme_version": "1.2" 00:26:49.112 }, 00:26:49.112 "ns_data": { 00:26:49.112 "id": 1, 00:26:49.112 "can_share": false 00:26:49.112 }, 00:26:49.112 "security": { 00:26:49.112 "opal": true 00:26:49.112 } 00:26:49.112 } 00:26:49.112 ], 00:26:49.112 "mp_policy": "active_passive" 00:26:49.112 } 00:26:49.112 } 00:26:49.112 ] 00:26:49.112 03:21:20 compress_compdev -- common/autotest_common.sh@903 -- # return 0 00:26:49.112 03:21:20 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:26:49.370 [2024-05-15 03:21:20.312777] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x128a750 PMD being used: compress_qat 00:26:50.305 cdbacb4d-c7f8-4d79-bb92-e97a5c35b069 00:26:50.305 03:21:21 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:26:50.305 d87020f3-15fb-4bac-8c5f-1f876fa15fa3 00:26:50.563 03:21:21 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:26:50.563 03:21:21 compress_compdev -- common/autotest_common.sh@895 -- # local bdev_name=lvs0/lv0 00:26:50.563 03:21:21 compress_compdev -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:50.563 03:21:21 compress_compdev -- common/autotest_common.sh@897 -- # local i 00:26:50.563 03:21:21 compress_compdev -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:50.563 03:21:21 compress_compdev -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:50.563 03:21:21 compress_compdev -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:50.563 03:21:21 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:26:50.822 [ 00:26:50.822 { 00:26:50.822 "name": "d87020f3-15fb-4bac-8c5f-1f876fa15fa3", 00:26:50.822 "aliases": [ 00:26:50.822 "lvs0/lv0" 00:26:50.822 ], 00:26:50.822 "product_name": "Logical Volume", 00:26:50.822 "block_size": 512, 00:26:50.822 "num_blocks": 204800, 00:26:50.822 "uuid": "d87020f3-15fb-4bac-8c5f-1f876fa15fa3", 00:26:50.822 "assigned_rate_limits": { 00:26:50.822 "rw_ios_per_sec": 0, 00:26:50.822 "rw_mbytes_per_sec": 0, 00:26:50.822 "r_mbytes_per_sec": 0, 00:26:50.822 "w_mbytes_per_sec": 0 00:26:50.822 }, 00:26:50.822 "claimed": false, 00:26:50.822 "zoned": false, 00:26:50.822 "supported_io_types": { 00:26:50.822 "read": true, 00:26:50.822 "write": true, 00:26:50.822 "unmap": true, 00:26:50.822 "write_zeroes": true, 00:26:50.822 "flush": false, 00:26:50.822 "reset": true, 00:26:50.822 "compare": false, 00:26:50.822 "compare_and_write": false, 
00:26:50.822 "abort": false, 00:26:50.822 "nvme_admin": false, 00:26:50.822 "nvme_io": false 00:26:50.822 }, 00:26:50.822 "driver_specific": { 00:26:50.822 "lvol": { 00:26:50.822 "lvol_store_uuid": "cdbacb4d-c7f8-4d79-bb92-e97a5c35b069", 00:26:50.822 "base_bdev": "Nvme0n1", 00:26:50.822 "thin_provision": true, 00:26:50.822 "num_allocated_clusters": 0, 00:26:50.822 "snapshot": false, 00:26:50.822 "clone": false, 00:26:50.822 "esnap_clone": false 00:26:50.822 } 00:26:50.822 } 00:26:50.822 } 00:26:50.822 ] 00:26:50.822 03:21:21 compress_compdev -- common/autotest_common.sh@903 -- # return 0 00:26:50.822 03:21:21 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:26:50.822 03:21:21 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:26:51.080 [2024-05-15 03:21:22.209457] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:26:51.080 COMP_lvs0/lv0 00:26:51.080 03:21:22 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:26:51.080 03:21:22 compress_compdev -- common/autotest_common.sh@895 -- # local bdev_name=COMP_lvs0/lv0 00:26:51.080 03:21:22 compress_compdev -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:51.080 03:21:22 compress_compdev -- common/autotest_common.sh@897 -- # local i 00:26:51.080 03:21:22 compress_compdev -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:51.080 03:21:22 compress_compdev -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:51.080 03:21:22 compress_compdev -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:51.338 03:21:22 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:26:51.596 [ 00:26:51.596 { 00:26:51.596 "name": "COMP_lvs0/lv0", 00:26:51.596 "aliases": [ 00:26:51.596 "88335a19-3e83-5adb-8e7f-ffbcc29863f2" 00:26:51.596 ], 00:26:51.596 "product_name": "compress", 00:26:51.596 "block_size": 512, 00:26:51.596 "num_blocks": 200704, 00:26:51.596 "uuid": "88335a19-3e83-5adb-8e7f-ffbcc29863f2", 00:26:51.596 "assigned_rate_limits": { 00:26:51.596 "rw_ios_per_sec": 0, 00:26:51.596 "rw_mbytes_per_sec": 0, 00:26:51.596 "r_mbytes_per_sec": 0, 00:26:51.596 "w_mbytes_per_sec": 0 00:26:51.596 }, 00:26:51.596 "claimed": false, 00:26:51.596 "zoned": false, 00:26:51.596 "supported_io_types": { 00:26:51.596 "read": true, 00:26:51.596 "write": true, 00:26:51.596 "unmap": false, 00:26:51.596 "write_zeroes": true, 00:26:51.596 "flush": false, 00:26:51.596 "reset": false, 00:26:51.596 "compare": false, 00:26:51.596 "compare_and_write": false, 00:26:51.596 "abort": false, 00:26:51.596 "nvme_admin": false, 00:26:51.596 "nvme_io": false 00:26:51.596 }, 00:26:51.596 "driver_specific": { 00:26:51.596 "compress": { 00:26:51.597 "name": "COMP_lvs0/lv0", 00:26:51.597 "base_bdev_name": "d87020f3-15fb-4bac-8c5f-1f876fa15fa3" 00:26:51.597 } 00:26:51.597 } 00:26:51.597 } 00:26:51.597 ] 00:26:51.597 03:21:22 compress_compdev -- common/autotest_common.sh@903 -- # return 0 00:26:51.597 03:21:22 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:51.854 [2024-05-15 03:21:22.851796] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f28881b15a0 PMD being used: 
compress_qat
[2024-05-15 03:21:22.853810] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1447970 PMD being used: compress_qat
Running I/O for 3 seconds...
00:26:55.135
00:26:55.135                                                                                                Latency(us)
00:26:55.135 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:55.135 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:26:55.135 Verification LBA range: start 0x0 length 0x3100
00:26:55.135 COMP_lvs0/lv0               :       3.01    3967.66      15.50       0.00     0.00    8006.47     130.68   13668.94
00:26:55.135 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:26:55.135 Verification LBA range: start 0x3100 length 0x3100
00:26:55.135 COMP_lvs0/lv0               :       3.01    4099.60      16.01       0.00     0.00    7759.39     119.95   13169.62
00:26:55.135 ===================================================================================================================
00:26:55.135 Total                       :               8067.27      31.51       0.00     0.00    7880.86     119.95   13668.94
00:26:55.135 0
00:26:55.135 03:21:25 compress_compdev -- compress/compress.sh@76 -- # destroy_vols
00:26:55.135 03:21:25 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:26:55.135 03:21:26 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:26:55.393 03:21:26 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:26:55.393 03:21:26 compress_compdev -- compress/compress.sh@78 -- # killprocess 33141
00:26:55.393 03:21:26 compress_compdev -- common/autotest_common.sh@946 -- # '[' -z 33141 ']'
00:26:55.393 03:21:26 compress_compdev -- common/autotest_common.sh@950 -- # kill -0 33141
00:26:55.393 03:21:26 compress_compdev -- common/autotest_common.sh@951 -- # uname
00:26:55.393 03:21:26 compress_compdev -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:26:55.393 03:21:26 compress_compdev -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 33141
00:26:55.393 03:21:26 compress_compdev -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:26:55.393 03:21:26 compress_compdev -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:26:55.393 03:21:26 compress_compdev -- common/autotest_common.sh@964 -- # echo 'killing process with pid 33141'
00:26:55.393 killing process with pid 33141
00:26:55.393 03:21:26 compress_compdev -- common/autotest_common.sh@965 -- # kill 33141
00:26:55.393 Received shutdown signal, test time was about 3.000000 seconds
00:26:55.393
00:26:55.393                                                                                                Latency(us)
00:26:55.393 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:55.393 ===================================================================================================================
00:26:55.393 Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:26:55.393 03:21:26 compress_compdev -- common/autotest_common.sh@970 -- # wait 33141
00:26:56.826 03:21:27 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512
00:26:56.826 03:21:27 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]]
00:26:56.826 03:21:27 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=35204
00:26:56.826 03:21:27 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:26:56.826 03:21:27 compress_compdev -- compress/compress.sh@67 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:26:56.826 03:21:27 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 35204 00:26:56.826 03:21:27 compress_compdev -- common/autotest_common.sh@827 -- # '[' -z 35204 ']' 00:26:56.826 03:21:27 compress_compdev -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.826 03:21:27 compress_compdev -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:56.826 03:21:27 compress_compdev -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.826 03:21:27 compress_compdev -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:56.826 03:21:27 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:26:57.085 [2024-05-15 03:21:28.014805] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:57.085 [2024-05-15 03:21:28.014878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid35204 ] 00:26:57.085 [2024-05-15 03:21:28.105692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:57.085 [2024-05-15 03:21:28.200021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:57.085 [2024-05-15 03:21:28.200028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.653 [2024-05-15 03:21:28.752794] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:26:57.911 03:21:28 compress_compdev -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:57.911 03:21:28 compress_compdev -- common/autotest_common.sh@860 -- # return 0 00:26:57.911 03:21:28 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:26:57.911 03:21:28 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:57.911 03:21:28 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:01.196 [2024-05-15 03:21:32.058934] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1bf1740 PMD being used: compress_qat 00:27:01.196 03:21:32 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:27:01.196 03:21:32 compress_compdev -- common/autotest_common.sh@895 -- # local bdev_name=Nvme0n1 00:27:01.196 03:21:32 compress_compdev -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:01.196 03:21:32 compress_compdev -- common/autotest_common.sh@897 -- # local i 00:27:01.197 03:21:32 compress_compdev -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:01.197 03:21:32 compress_compdev -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:01.197 03:21:32 compress_compdev -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:01.197 03:21:32 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:27:01.455 [ 00:27:01.455 { 00:27:01.455 "name": "Nvme0n1", 00:27:01.455 "aliases": [ 
00:27:01.455 "22b659b0-a6b2-450a-ad64-17d5e7a206e5" 00:27:01.455 ], 00:27:01.455 "product_name": "NVMe disk", 00:27:01.455 "block_size": 512, 00:27:01.455 "num_blocks": 1953525168, 00:27:01.455 "uuid": "22b659b0-a6b2-450a-ad64-17d5e7a206e5", 00:27:01.455 "assigned_rate_limits": { 00:27:01.455 "rw_ios_per_sec": 0, 00:27:01.455 "rw_mbytes_per_sec": 0, 00:27:01.455 "r_mbytes_per_sec": 0, 00:27:01.455 "w_mbytes_per_sec": 0 00:27:01.455 }, 00:27:01.455 "claimed": false, 00:27:01.455 "zoned": false, 00:27:01.455 "supported_io_types": { 00:27:01.455 "read": true, 00:27:01.455 "write": true, 00:27:01.455 "unmap": true, 00:27:01.455 "write_zeroes": true, 00:27:01.455 "flush": true, 00:27:01.455 "reset": true, 00:27:01.455 "compare": false, 00:27:01.455 "compare_and_write": false, 00:27:01.455 "abort": true, 00:27:01.455 "nvme_admin": true, 00:27:01.455 "nvme_io": true 00:27:01.455 }, 00:27:01.455 "driver_specific": { 00:27:01.455 "nvme": [ 00:27:01.455 { 00:27:01.455 "pci_address": "0000:5e:00.0", 00:27:01.455 "trid": { 00:27:01.455 "trtype": "PCIe", 00:27:01.455 "traddr": "0000:5e:00.0" 00:27:01.455 }, 00:27:01.455 "ctrlr_data": { 00:27:01.455 "cntlid": 0, 00:27:01.455 "vendor_id": "0x8086", 00:27:01.456 "model_number": "INTEL SSDPE2KX010T8", 00:27:01.456 "serial_number": "BTLJ807001JM1P0FGN", 00:27:01.456 "firmware_revision": "VDV10170", 00:27:01.456 "oacs": { 00:27:01.456 "security": 1, 00:27:01.456 "format": 1, 00:27:01.456 "firmware": 1, 00:27:01.456 "ns_manage": 1 00:27:01.456 }, 00:27:01.456 "multi_ctrlr": false, 00:27:01.456 "ana_reporting": false 00:27:01.456 }, 00:27:01.456 "vs": { 00:27:01.456 "nvme_version": "1.2" 00:27:01.456 }, 00:27:01.456 "ns_data": { 00:27:01.456 "id": 1, 00:27:01.456 "can_share": false 00:27:01.456 }, 00:27:01.456 "security": { 00:27:01.456 "opal": true 00:27:01.456 } 00:27:01.456 } 00:27:01.456 ], 00:27:01.456 "mp_policy": "active_passive" 00:27:01.456 } 00:27:01.456 } 00:27:01.456 ] 00:27:01.456 03:21:32 compress_compdev -- common/autotest_common.sh@903 -- # return 0 00:27:01.456 03:21:32 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:27:01.744 [2024-05-15 03:21:32.824745] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1a29750 PMD being used: compress_qat 00:27:02.678 b4993321-7562-464e-abfb-96f18c48050e 00:27:02.678 03:21:33 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:27:02.936 e3bc7eb8-7004-474d-8f33-a9aafff5ef23 00:27:02.936 03:21:33 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:27:02.936 03:21:33 compress_compdev -- common/autotest_common.sh@895 -- # local bdev_name=lvs0/lv0 00:27:02.936 03:21:33 compress_compdev -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:02.936 03:21:33 compress_compdev -- common/autotest_common.sh@897 -- # local i 00:27:02.936 03:21:33 compress_compdev -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:02.936 03:21:33 compress_compdev -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:02.936 03:21:33 compress_compdev -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:03.195 03:21:34 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:27:03.454 [ 
00:27:03.454 { 00:27:03.454 "name": "e3bc7eb8-7004-474d-8f33-a9aafff5ef23", 00:27:03.454 "aliases": [ 00:27:03.454 "lvs0/lv0" 00:27:03.454 ], 00:27:03.454 "product_name": "Logical Volume", 00:27:03.454 "block_size": 512, 00:27:03.454 "num_blocks": 204800, 00:27:03.454 "uuid": "e3bc7eb8-7004-474d-8f33-a9aafff5ef23", 00:27:03.454 "assigned_rate_limits": { 00:27:03.454 "rw_ios_per_sec": 0, 00:27:03.454 "rw_mbytes_per_sec": 0, 00:27:03.454 "r_mbytes_per_sec": 0, 00:27:03.454 "w_mbytes_per_sec": 0 00:27:03.454 }, 00:27:03.454 "claimed": false, 00:27:03.454 "zoned": false, 00:27:03.454 "supported_io_types": { 00:27:03.454 "read": true, 00:27:03.454 "write": true, 00:27:03.454 "unmap": true, 00:27:03.454 "write_zeroes": true, 00:27:03.454 "flush": false, 00:27:03.454 "reset": true, 00:27:03.454 "compare": false, 00:27:03.454 "compare_and_write": false, 00:27:03.454 "abort": false, 00:27:03.454 "nvme_admin": false, 00:27:03.454 "nvme_io": false 00:27:03.454 }, 00:27:03.454 "driver_specific": { 00:27:03.454 "lvol": { 00:27:03.454 "lvol_store_uuid": "b4993321-7562-464e-abfb-96f18c48050e", 00:27:03.454 "base_bdev": "Nvme0n1", 00:27:03.454 "thin_provision": true, 00:27:03.454 "num_allocated_clusters": 0, 00:27:03.454 "snapshot": false, 00:27:03.454 "clone": false, 00:27:03.454 "esnap_clone": false 00:27:03.454 } 00:27:03.454 } 00:27:03.454 } 00:27:03.454 ] 00:27:03.454 03:21:34 compress_compdev -- common/autotest_common.sh@903 -- # return 0 00:27:03.454 03:21:34 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:27:03.454 03:21:34 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:27:03.713 [2024-05-15 03:21:34.668583] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:27:03.713 COMP_lvs0/lv0 00:27:03.713 03:21:34 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:27:03.713 03:21:34 compress_compdev -- common/autotest_common.sh@895 -- # local bdev_name=COMP_lvs0/lv0 00:27:03.713 03:21:34 compress_compdev -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:03.713 03:21:34 compress_compdev -- common/autotest_common.sh@897 -- # local i 00:27:03.713 03:21:34 compress_compdev -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:03.713 03:21:34 compress_compdev -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:03.713 03:21:34 compress_compdev -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:03.971 03:21:34 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:27:04.229 [ 00:27:04.229 { 00:27:04.229 "name": "COMP_lvs0/lv0", 00:27:04.229 "aliases": [ 00:27:04.229 "7484aa85-8ce3-5313-b318-522174a5a8db" 00:27:04.229 ], 00:27:04.229 "product_name": "compress", 00:27:04.229 "block_size": 512, 00:27:04.229 "num_blocks": 200704, 00:27:04.229 "uuid": "7484aa85-8ce3-5313-b318-522174a5a8db", 00:27:04.229 "assigned_rate_limits": { 00:27:04.229 "rw_ios_per_sec": 0, 00:27:04.229 "rw_mbytes_per_sec": 0, 00:27:04.229 "r_mbytes_per_sec": 0, 00:27:04.229 "w_mbytes_per_sec": 0 00:27:04.229 }, 00:27:04.229 "claimed": false, 00:27:04.229 "zoned": false, 00:27:04.229 "supported_io_types": { 00:27:04.229 "read": true, 00:27:04.229 "write": true, 00:27:04.229 "unmap": false, 00:27:04.229 
"write_zeroes": true, 00:27:04.229 "flush": false, 00:27:04.229 "reset": false, 00:27:04.229 "compare": false, 00:27:04.229 "compare_and_write": false, 00:27:04.229 "abort": false, 00:27:04.229 "nvme_admin": false, 00:27:04.229 "nvme_io": false 00:27:04.229 }, 00:27:04.229 "driver_specific": { 00:27:04.229 "compress": { 00:27:04.229 "name": "COMP_lvs0/lv0", 00:27:04.229 "base_bdev_name": "e3bc7eb8-7004-474d-8f33-a9aafff5ef23" 00:27:04.229 } 00:27:04.229 } 00:27:04.229 } 00:27:04.229 ] 00:27:04.229 03:21:35 compress_compdev -- common/autotest_common.sh@903 -- # return 0 00:27:04.229 03:21:35 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:04.229 [2024-05-15 03:21:35.322936] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fef9c1b15a0 PMD being used: compress_qat 00:27:04.229 [2024-05-15 03:21:35.324946] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1be6970 PMD being used: compress_qat 00:27:04.229 Running I/O for 3 seconds... 00:27:07.513 00:27:07.513 Latency(us) 00:27:07.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.513 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:27:07.513 Verification LBA range: start 0x0 length 0x3100 00:27:07.513 COMP_lvs0/lv0 : 3.01 3912.79 15.28 0.00 0.00 8118.06 129.71 13793.77 00:27:07.513 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:27:07.513 Verification LBA range: start 0x3100 length 0x3100 00:27:07.513 COMP_lvs0/lv0 : 3.01 3986.79 15.57 0.00 0.00 7987.54 120.93 13981.01 00:27:07.513 =================================================================================================================== 00:27:07.513 Total : 7899.58 30.86 0.00 0.00 8052.21 120.93 13981.01 00:27:07.513 0 00:27:07.513 03:21:38 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:27:07.513 03:21:38 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:27:07.513 03:21:38 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:27:07.771 03:21:38 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:07.771 03:21:38 compress_compdev -- compress/compress.sh@78 -- # killprocess 35204 00:27:07.771 03:21:38 compress_compdev -- common/autotest_common.sh@946 -- # '[' -z 35204 ']' 00:27:07.771 03:21:38 compress_compdev -- common/autotest_common.sh@950 -- # kill -0 35204 00:27:07.771 03:21:38 compress_compdev -- common/autotest_common.sh@951 -- # uname 00:27:07.771 03:21:38 compress_compdev -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:07.771 03:21:38 compress_compdev -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 35204 00:27:08.029 03:21:38 compress_compdev -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:08.029 03:21:38 compress_compdev -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:08.029 03:21:38 compress_compdev -- common/autotest_common.sh@964 -- # echo 'killing process with pid 35204' 00:27:08.029 killing process with pid 35204 00:27:08.029 03:21:38 compress_compdev -- common/autotest_common.sh@965 -- # kill 35204 00:27:08.029 Received shutdown signal, test time was about 3.000000 seconds 00:27:08.029 00:27:08.029 Latency(us) 00:27:08.029 Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.029 =================================================================================================================== 00:27:08.029 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:08.029 03:21:38 compress_compdev -- common/autotest_common.sh@970 -- # wait 35204 00:27:09.405 03:21:40 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:27:09.405 03:21:40 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:27:09.405 03:21:40 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=37258 00:27:09.405 03:21:40 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:09.405 03:21:40 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:27:09.405 03:21:40 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 37258 00:27:09.405 03:21:40 compress_compdev -- common/autotest_common.sh@827 -- # '[' -z 37258 ']' 00:27:09.405 03:21:40 compress_compdev -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.405 03:21:40 compress_compdev -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:09.405 03:21:40 compress_compdev -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.405 03:21:40 compress_compdev -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:09.405 03:21:40 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:27:09.405 [2024-05-15 03:21:40.492274] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
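The run_bdevperf cycles in this trace differ only in the logical block size handed down to create_vols: the first cycle omits -l, the second uses -l 512, and the run starting here uses -l 4096. A minimal sketch of the RPC sequence each cycle drives, assembled from the commands visible in the trace (paths shortened; assumes a running SPDK app with Nvme0n1 attached):

# create_vols: lvstore on the NVMe bdev, a thin-provisioned lvol, then the compress bdev on top
rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
rpc.py bdev_lvol_create -t -l lvs0 lv0 100
rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096   # omit -l for the default logical block size
rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000                 # poll until the compress bdev is examined
# destroy_vols: tear down in reverse order
rpc.py bdev_compress_delete COMP_lvs0/lv0
rpc.py bdev_lvol_delete_lvstore -l lvs0

The -l value changes only the exported geometry, not the capacity: the -l 512 run reports 200704 blocks of 512 bytes, the -l 4096 run below reports 25088 blocks of 4096 bytes, and 200704 * 512 = 25088 * 4096 = 102760448 bytes (98 MiB) in both cases.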
00:27:09.405 [2024-05-15 03:21:40.492333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid37258 ] 00:27:09.664 [2024-05-15 03:21:40.584171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:09.664 [2024-05-15 03:21:40.679802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.664 [2024-05-15 03:21:40.679808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.230 [2024-05-15 03:21:41.233821] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:27:10.489 03:21:41 compress_compdev -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:10.489 03:21:41 compress_compdev -- common/autotest_common.sh@860 -- # return 0 00:27:10.489 03:21:41 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:27:10.489 03:21:41 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:10.489 03:21:41 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:13.771 [2024-05-15 03:21:44.538919] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x28b2740 PMD being used: compress_qat 00:27:13.771 03:21:44 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:27:13.771 03:21:44 compress_compdev -- common/autotest_common.sh@895 -- # local bdev_name=Nvme0n1 00:27:13.771 03:21:44 compress_compdev -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:13.771 03:21:44 compress_compdev -- common/autotest_common.sh@897 -- # local i 00:27:13.771 03:21:44 compress_compdev -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:13.771 03:21:44 compress_compdev -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:13.771 03:21:44 compress_compdev -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:13.771 03:21:44 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:27:14.031 [ 00:27:14.031 { 00:27:14.031 "name": "Nvme0n1", 00:27:14.031 "aliases": [ 00:27:14.031 "3b52bea4-d600-45f2-8270-227a9d986590" 00:27:14.031 ], 00:27:14.031 "product_name": "NVMe disk", 00:27:14.031 "block_size": 512, 00:27:14.031 "num_blocks": 1953525168, 00:27:14.031 "uuid": "3b52bea4-d600-45f2-8270-227a9d986590", 00:27:14.031 "assigned_rate_limits": { 00:27:14.031 "rw_ios_per_sec": 0, 00:27:14.031 "rw_mbytes_per_sec": 0, 00:27:14.031 "r_mbytes_per_sec": 0, 00:27:14.031 "w_mbytes_per_sec": 0 00:27:14.031 }, 00:27:14.031 "claimed": false, 00:27:14.031 "zoned": false, 00:27:14.031 "supported_io_types": { 00:27:14.031 "read": true, 00:27:14.031 "write": true, 00:27:14.031 "unmap": true, 00:27:14.031 "write_zeroes": true, 00:27:14.031 "flush": true, 00:27:14.031 "reset": true, 00:27:14.031 "compare": false, 00:27:14.031 "compare_and_write": false, 00:27:14.031 "abort": true, 00:27:14.031 "nvme_admin": true, 00:27:14.031 "nvme_io": true 00:27:14.031 }, 00:27:14.031 "driver_specific": { 00:27:14.031 "nvme": [ 00:27:14.031 { 00:27:14.031 "pci_address": "0000:5e:00.0", 00:27:14.031 "trid": { 00:27:14.031 "trtype": "PCIe", 00:27:14.031 "traddr": "0000:5e:00.0" 00:27:14.031 }, 00:27:14.031 "ctrlr_data": { 00:27:14.031 
"cntlid": 0, 00:27:14.031 "vendor_id": "0x8086", 00:27:14.031 "model_number": "INTEL SSDPE2KX010T8", 00:27:14.031 "serial_number": "BTLJ807001JM1P0FGN", 00:27:14.031 "firmware_revision": "VDV10170", 00:27:14.031 "oacs": { 00:27:14.031 "security": 1, 00:27:14.031 "format": 1, 00:27:14.031 "firmware": 1, 00:27:14.031 "ns_manage": 1 00:27:14.031 }, 00:27:14.031 "multi_ctrlr": false, 00:27:14.031 "ana_reporting": false 00:27:14.031 }, 00:27:14.031 "vs": { 00:27:14.031 "nvme_version": "1.2" 00:27:14.031 }, 00:27:14.031 "ns_data": { 00:27:14.031 "id": 1, 00:27:14.031 "can_share": false 00:27:14.031 }, 00:27:14.031 "security": { 00:27:14.031 "opal": true 00:27:14.031 } 00:27:14.031 } 00:27:14.031 ], 00:27:14.031 "mp_policy": "active_passive" 00:27:14.031 } 00:27:14.031 } 00:27:14.031 ] 00:27:14.031 03:21:45 compress_compdev -- common/autotest_common.sh@903 -- # return 0 00:27:14.031 03:21:45 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:27:14.290 [2024-05-15 03:21:45.304713] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x27179f0 PMD being used: compress_qat 00:27:15.225 27190e01-4d66-4df8-bd1b-41feaa1933e7 00:27:15.225 03:21:46 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:27:15.484 e37dd201-e290-4cc4-ac8a-3ead4244c89f 00:27:15.484 03:21:46 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:27:15.484 03:21:46 compress_compdev -- common/autotest_common.sh@895 -- # local bdev_name=lvs0/lv0 00:27:15.484 03:21:46 compress_compdev -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:15.484 03:21:46 compress_compdev -- common/autotest_common.sh@897 -- # local i 00:27:15.484 03:21:46 compress_compdev -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:15.484 03:21:46 compress_compdev -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:15.484 03:21:46 compress_compdev -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:15.742 03:21:46 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:27:16.001 [ 00:27:16.001 { 00:27:16.001 "name": "e37dd201-e290-4cc4-ac8a-3ead4244c89f", 00:27:16.001 "aliases": [ 00:27:16.001 "lvs0/lv0" 00:27:16.001 ], 00:27:16.001 "product_name": "Logical Volume", 00:27:16.001 "block_size": 512, 00:27:16.001 "num_blocks": 204800, 00:27:16.001 "uuid": "e37dd201-e290-4cc4-ac8a-3ead4244c89f", 00:27:16.001 "assigned_rate_limits": { 00:27:16.001 "rw_ios_per_sec": 0, 00:27:16.001 "rw_mbytes_per_sec": 0, 00:27:16.001 "r_mbytes_per_sec": 0, 00:27:16.001 "w_mbytes_per_sec": 0 00:27:16.001 }, 00:27:16.001 "claimed": false, 00:27:16.001 "zoned": false, 00:27:16.001 "supported_io_types": { 00:27:16.001 "read": true, 00:27:16.001 "write": true, 00:27:16.001 "unmap": true, 00:27:16.001 "write_zeroes": true, 00:27:16.001 "flush": false, 00:27:16.001 "reset": true, 00:27:16.001 "compare": false, 00:27:16.001 "compare_and_write": false, 00:27:16.001 "abort": false, 00:27:16.001 "nvme_admin": false, 00:27:16.001 "nvme_io": false 00:27:16.001 }, 00:27:16.001 "driver_specific": { 00:27:16.001 "lvol": { 00:27:16.001 "lvol_store_uuid": "27190e01-4d66-4df8-bd1b-41feaa1933e7", 00:27:16.001 "base_bdev": "Nvme0n1", 00:27:16.001 "thin_provision": 
true, 00:27:16.001 "num_allocated_clusters": 0, 00:27:16.001 "snapshot": false, 00:27:16.001 "clone": false, 00:27:16.001 "esnap_clone": false 00:27:16.001 } 00:27:16.001 } 00:27:16.001 } 00:27:16.001 ] 00:27:16.001 03:21:46 compress_compdev -- common/autotest_common.sh@903 -- # return 0 00:27:16.001 03:21:46 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:27:16.001 03:21:46 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:27:16.260 [2024-05-15 03:21:47.195828] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:27:16.260 COMP_lvs0/lv0 00:27:16.260 03:21:47 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:27:16.260 03:21:47 compress_compdev -- common/autotest_common.sh@895 -- # local bdev_name=COMP_lvs0/lv0 00:27:16.260 03:21:47 compress_compdev -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:16.260 03:21:47 compress_compdev -- common/autotest_common.sh@897 -- # local i 00:27:16.260 03:21:47 compress_compdev -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:16.260 03:21:47 compress_compdev -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:16.260 03:21:47 compress_compdev -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:16.518 03:21:47 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:27:16.777 [ 00:27:16.777 { 00:27:16.777 "name": "COMP_lvs0/lv0", 00:27:16.777 "aliases": [ 00:27:16.777 "1bd35f84-0b57-5699-90c8-629e50e7164a" 00:27:16.777 ], 00:27:16.777 "product_name": "compress", 00:27:16.777 "block_size": 4096, 00:27:16.777 "num_blocks": 25088, 00:27:16.777 "uuid": "1bd35f84-0b57-5699-90c8-629e50e7164a", 00:27:16.777 "assigned_rate_limits": { 00:27:16.777 "rw_ios_per_sec": 0, 00:27:16.777 "rw_mbytes_per_sec": 0, 00:27:16.777 "r_mbytes_per_sec": 0, 00:27:16.777 "w_mbytes_per_sec": 0 00:27:16.777 }, 00:27:16.777 "claimed": false, 00:27:16.777 "zoned": false, 00:27:16.777 "supported_io_types": { 00:27:16.777 "read": true, 00:27:16.777 "write": true, 00:27:16.777 "unmap": false, 00:27:16.777 "write_zeroes": true, 00:27:16.777 "flush": false, 00:27:16.777 "reset": false, 00:27:16.777 "compare": false, 00:27:16.777 "compare_and_write": false, 00:27:16.777 "abort": false, 00:27:16.777 "nvme_admin": false, 00:27:16.777 "nvme_io": false 00:27:16.777 }, 00:27:16.777 "driver_specific": { 00:27:16.777 "compress": { 00:27:16.777 "name": "COMP_lvs0/lv0", 00:27:16.777 "base_bdev_name": "e37dd201-e290-4cc4-ac8a-3ead4244c89f" 00:27:16.777 } 00:27:16.777 } 00:27:16.777 } 00:27:16.777 ] 00:27:16.777 03:21:47 compress_compdev -- common/autotest_common.sh@903 -- # return 0 00:27:16.777 03:21:47 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:16.777 [2024-05-15 03:21:47.838114] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f231c1b15a0 PMD being used: compress_qat 00:27:16.777 [2024-05-15 03:21:47.840079] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x28a7b50 PMD being used: compress_qat 00:27:16.777 Running I/O for 3 seconds... 
00:27:20.137 00:27:20.137 Latency(us) 00:27:20.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.138 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:27:20.138 Verification LBA range: start 0x0 length 0x3100 00:27:20.138 COMP_lvs0/lv0 : 3.01 3874.03 15.13 0.00 0.00 8199.02 176.52 13793.77 00:27:20.138 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:27:20.138 Verification LBA range: start 0x3100 length 0x3100 00:27:20.138 COMP_lvs0/lv0 : 3.01 3980.07 15.55 0.00 0.00 7996.70 164.82 13918.60 00:27:20.138 =================================================================================================================== 00:27:20.138 Total : 7854.10 30.68 0.00 0.00 8096.49 164.82 13918.60 00:27:20.138 0 00:27:20.138 03:21:50 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:27:20.138 03:21:50 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:27:20.138 03:21:51 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:27:20.397 03:21:51 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:20.397 03:21:51 compress_compdev -- compress/compress.sh@78 -- # killprocess 37258 00:27:20.397 03:21:51 compress_compdev -- common/autotest_common.sh@946 -- # '[' -z 37258 ']' 00:27:20.397 03:21:51 compress_compdev -- common/autotest_common.sh@950 -- # kill -0 37258 00:27:20.397 03:21:51 compress_compdev -- common/autotest_common.sh@951 -- # uname 00:27:20.397 03:21:51 compress_compdev -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:20.397 03:21:51 compress_compdev -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 37258 00:27:20.397 03:21:51 compress_compdev -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:20.397 03:21:51 compress_compdev -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:20.397 03:21:51 compress_compdev -- common/autotest_common.sh@964 -- # echo 'killing process with pid 37258' 00:27:20.397 killing process with pid 37258 00:27:20.397 03:21:51 compress_compdev -- common/autotest_common.sh@965 -- # kill 37258 00:27:20.397 Received shutdown signal, test time was about 3.000000 seconds 00:27:20.397 00:27:20.397 Latency(us) 00:27:20.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.397 =================================================================================================================== 00:27:20.397 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:20.397 03:21:51 compress_compdev -- common/autotest_common.sh@970 -- # wait 37258 00:27:22.298 03:21:52 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:27:22.298 03:21:52 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:27:22.298 03:21:52 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=39202 00:27:22.298 03:21:52 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:22.298 03:21:52 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:27:22.298 03:21:52 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 39202 00:27:22.298 
03:21:52 compress_compdev -- common/autotest_common.sh@827 -- # '[' -z 39202 ']' 00:27:22.298 03:21:52 compress_compdev -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.298 03:21:52 compress_compdev -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:22.298 03:21:52 compress_compdev -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.298 03:21:52 compress_compdev -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:22.298 03:21:52 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:27:22.298 [2024-05-15 03:21:52.991415] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:27:22.298 [2024-05-15 03:21:52.991473] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39202 ] 00:27:22.298 [2024-05-15 03:21:53.088932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:22.298 [2024-05-15 03:21:53.185411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.298 [2024-05-15 03:21:53.185512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.298 [2024-05-15 03:21:53.185516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.863 [2024-05-15 03:21:53.752925] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:27:22.863 03:21:53 compress_compdev -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:22.863 03:21:53 compress_compdev -- common/autotest_common.sh@860 -- # return 0 00:27:22.863 03:21:53 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:27:22.863 03:21:53 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:22.863 03:21:53 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:26.144 [2024-05-15 03:21:57.043405] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x22041b0 PMD being used: compress_qat 00:27:26.144 03:21:57 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:27:26.144 03:21:57 compress_compdev -- common/autotest_common.sh@895 -- # local bdev_name=Nvme0n1 00:27:26.144 03:21:57 compress_compdev -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:26.144 03:21:57 compress_compdev -- common/autotest_common.sh@897 -- # local i 00:27:26.144 03:21:57 compress_compdev -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:26.144 03:21:57 compress_compdev -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:26.144 03:21:57 compress_compdev -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:26.402 03:21:57 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:27:26.402 [ 00:27:26.402 { 00:27:26.402 "name": "Nvme0n1", 00:27:26.402 "aliases": [ 00:27:26.402 "a247532b-fe18-4a8b-8593-3f2329c4edc0" 00:27:26.402 ], 00:27:26.402 "product_name": "NVMe disk", 00:27:26.402 "block_size": 512, 00:27:26.402 "num_blocks": 1953525168, 00:27:26.402 
"uuid": "a247532b-fe18-4a8b-8593-3f2329c4edc0", 00:27:26.402 "assigned_rate_limits": { 00:27:26.402 "rw_ios_per_sec": 0, 00:27:26.402 "rw_mbytes_per_sec": 0, 00:27:26.402 "r_mbytes_per_sec": 0, 00:27:26.402 "w_mbytes_per_sec": 0 00:27:26.402 }, 00:27:26.402 "claimed": false, 00:27:26.402 "zoned": false, 00:27:26.402 "supported_io_types": { 00:27:26.402 "read": true, 00:27:26.402 "write": true, 00:27:26.402 "unmap": true, 00:27:26.402 "write_zeroes": true, 00:27:26.402 "flush": true, 00:27:26.402 "reset": true, 00:27:26.402 "compare": false, 00:27:26.402 "compare_and_write": false, 00:27:26.402 "abort": true, 00:27:26.402 "nvme_admin": true, 00:27:26.402 "nvme_io": true 00:27:26.402 }, 00:27:26.402 "driver_specific": { 00:27:26.402 "nvme": [ 00:27:26.402 { 00:27:26.402 "pci_address": "0000:5e:00.0", 00:27:26.402 "trid": { 00:27:26.402 "trtype": "PCIe", 00:27:26.402 "traddr": "0000:5e:00.0" 00:27:26.402 }, 00:27:26.402 "ctrlr_data": { 00:27:26.402 "cntlid": 0, 00:27:26.402 "vendor_id": "0x8086", 00:27:26.402 "model_number": "INTEL SSDPE2KX010T8", 00:27:26.402 "serial_number": "BTLJ807001JM1P0FGN", 00:27:26.402 "firmware_revision": "VDV10170", 00:27:26.402 "oacs": { 00:27:26.403 "security": 1, 00:27:26.403 "format": 1, 00:27:26.403 "firmware": 1, 00:27:26.403 "ns_manage": 1 00:27:26.403 }, 00:27:26.403 "multi_ctrlr": false, 00:27:26.403 "ana_reporting": false 00:27:26.403 }, 00:27:26.403 "vs": { 00:27:26.403 "nvme_version": "1.2" 00:27:26.403 }, 00:27:26.403 "ns_data": { 00:27:26.403 "id": 1, 00:27:26.403 "can_share": false 00:27:26.403 }, 00:27:26.403 "security": { 00:27:26.403 "opal": true 00:27:26.403 } 00:27:26.403 } 00:27:26.403 ], 00:27:26.403 "mp_policy": "active_passive" 00:27:26.403 } 00:27:26.403 } 00:27:26.403 ] 00:27:26.660 03:21:57 compress_compdev -- common/autotest_common.sh@903 -- # return 0 00:27:26.660 03:21:57 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:27:26.660 [2024-05-15 03:21:57.797821] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x203c010 PMD being used: compress_qat 00:27:27.593 9685892c-b5c3-42b2-a30c-a2342d41aa65 00:27:27.593 03:21:58 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:27:27.895 cb865ea9-16a5-4cae-b588-385cb2d6fc24 00:27:27.895 03:21:58 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:27:27.895 03:21:58 compress_compdev -- common/autotest_common.sh@895 -- # local bdev_name=lvs0/lv0 00:27:27.895 03:21:58 compress_compdev -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:27.895 03:21:58 compress_compdev -- common/autotest_common.sh@897 -- # local i 00:27:27.895 03:21:58 compress_compdev -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:27.895 03:21:58 compress_compdev -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:27.895 03:21:58 compress_compdev -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:28.154 03:21:59 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:27:28.411 [ 00:27:28.411 { 00:27:28.411 "name": "cb865ea9-16a5-4cae-b588-385cb2d6fc24", 00:27:28.411 "aliases": [ 00:27:28.411 "lvs0/lv0" 00:27:28.411 ], 00:27:28.411 "product_name": "Logical Volume", 00:27:28.411 
"block_size": 512, 00:27:28.411 "num_blocks": 204800, 00:27:28.411 "uuid": "cb865ea9-16a5-4cae-b588-385cb2d6fc24", 00:27:28.411 "assigned_rate_limits": { 00:27:28.411 "rw_ios_per_sec": 0, 00:27:28.411 "rw_mbytes_per_sec": 0, 00:27:28.411 "r_mbytes_per_sec": 0, 00:27:28.411 "w_mbytes_per_sec": 0 00:27:28.411 }, 00:27:28.411 "claimed": false, 00:27:28.411 "zoned": false, 00:27:28.411 "supported_io_types": { 00:27:28.411 "read": true, 00:27:28.411 "write": true, 00:27:28.411 "unmap": true, 00:27:28.411 "write_zeroes": true, 00:27:28.411 "flush": false, 00:27:28.411 "reset": true, 00:27:28.411 "compare": false, 00:27:28.411 "compare_and_write": false, 00:27:28.411 "abort": false, 00:27:28.411 "nvme_admin": false, 00:27:28.411 "nvme_io": false 00:27:28.411 }, 00:27:28.411 "driver_specific": { 00:27:28.411 "lvol": { 00:27:28.411 "lvol_store_uuid": "9685892c-b5c3-42b2-a30c-a2342d41aa65", 00:27:28.411 "base_bdev": "Nvme0n1", 00:27:28.411 "thin_provision": true, 00:27:28.411 "num_allocated_clusters": 0, 00:27:28.411 "snapshot": false, 00:27:28.411 "clone": false, 00:27:28.411 "esnap_clone": false 00:27:28.411 } 00:27:28.411 } 00:27:28.411 } 00:27:28.411 ] 00:27:28.411 03:21:59 compress_compdev -- common/autotest_common.sh@903 -- # return 0 00:27:28.411 03:21:59 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:27:28.412 03:21:59 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:27:28.668 [2024-05-15 03:21:59.687278] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:27:28.668 COMP_lvs0/lv0 00:27:28.668 03:21:59 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:27:28.668 03:21:59 compress_compdev -- common/autotest_common.sh@895 -- # local bdev_name=COMP_lvs0/lv0 00:27:28.668 03:21:59 compress_compdev -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:28.668 03:21:59 compress_compdev -- common/autotest_common.sh@897 -- # local i 00:27:28.669 03:21:59 compress_compdev -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:28.669 03:21:59 compress_compdev -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:28.669 03:21:59 compress_compdev -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:28.925 03:21:59 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:27:29.182 [ 00:27:29.182 { 00:27:29.182 "name": "COMP_lvs0/lv0", 00:27:29.182 "aliases": [ 00:27:29.182 "3acc820a-b8e8-5fed-bea5-6ca057ff847d" 00:27:29.182 ], 00:27:29.182 "product_name": "compress", 00:27:29.182 "block_size": 512, 00:27:29.182 "num_blocks": 200704, 00:27:29.182 "uuid": "3acc820a-b8e8-5fed-bea5-6ca057ff847d", 00:27:29.182 "assigned_rate_limits": { 00:27:29.182 "rw_ios_per_sec": 0, 00:27:29.182 "rw_mbytes_per_sec": 0, 00:27:29.182 "r_mbytes_per_sec": 0, 00:27:29.182 "w_mbytes_per_sec": 0 00:27:29.182 }, 00:27:29.182 "claimed": false, 00:27:29.182 "zoned": false, 00:27:29.182 "supported_io_types": { 00:27:29.182 "read": true, 00:27:29.182 "write": true, 00:27:29.182 "unmap": false, 00:27:29.182 "write_zeroes": true, 00:27:29.182 "flush": false, 00:27:29.182 "reset": false, 00:27:29.182 "compare": false, 00:27:29.182 "compare_and_write": false, 00:27:29.182 "abort": false, 00:27:29.182 "nvme_admin": false, 
00:27:29.182 "nvme_io": false 00:27:29.182 }, 00:27:29.182 "driver_specific": { 00:27:29.182 "compress": { 00:27:29.182 "name": "COMP_lvs0/lv0", 00:27:29.182 "base_bdev_name": "cb865ea9-16a5-4cae-b588-385cb2d6fc24" 00:27:29.182 } 00:27:29.182 } 00:27:29.182 } 00:27:29.182 ] 00:27:29.182 03:22:00 compress_compdev -- common/autotest_common.sh@903 -- # return 0 00:27:29.182 03:22:00 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:29.439 [2024-05-15 03:22:00.344442] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f65341b1330 PMD being used: compress_qat 00:27:29.439 I/O targets: 00:27:29.439 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:27:29.439 00:27:29.439 00:27:29.439 CUnit - A unit testing framework for C - Version 2.1-3 00:27:29.439 http://cunit.sourceforge.net/ 00:27:29.439 00:27:29.439 00:27:29.439 Suite: bdevio tests on: COMP_lvs0/lv0 00:27:29.439 Test: blockdev write read block ...passed 00:27:29.439 Test: blockdev write zeroes read block ...passed 00:27:29.439 Test: blockdev write zeroes read no split ...passed 00:27:29.439 Test: blockdev write zeroes read split ...passed 00:27:29.439 Test: blockdev write zeroes read split partial ...passed 00:27:29.439 Test: blockdev reset ...[2024-05-15 03:22:00.396971] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:27:29.439 passed 00:27:29.439 Test: blockdev write read 8 blocks ...passed 00:27:29.439 Test: blockdev write read size > 128k ...passed 00:27:29.439 Test: blockdev write read invalid size ...passed 00:27:29.439 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:29.439 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:29.439 Test: blockdev write read max offset ...passed 00:27:29.439 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:29.439 Test: blockdev writev readv 8 blocks ...passed 00:27:29.439 Test: blockdev writev readv 30 x 1block ...passed 00:27:29.439 Test: blockdev writev readv block ...passed 00:27:29.440 Test: blockdev writev readv size > 128k ...passed 00:27:29.440 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:29.440 Test: blockdev comparev and writev ...passed 00:27:29.440 Test: blockdev nvme passthru rw ...passed 00:27:29.440 Test: blockdev nvme passthru vendor specific ...passed 00:27:29.440 Test: blockdev nvme admin passthru ...passed 00:27:29.440 Test: blockdev copy ...passed 00:27:29.440 00:27:29.440 Run Summary: Type Total Ran Passed Failed Inactive 00:27:29.440 suites 1 1 n/a 0 0 00:27:29.440 tests 23 23 23 0 0 00:27:29.440 asserts 130 130 130 0 n/a 00:27:29.440 00:27:29.440 Elapsed time = 0.159 seconds 00:27:29.440 0 00:27:29.440 03:22:00 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:27:29.440 03:22:00 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:27:29.702 03:22:00 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:27:29.959 03:22:00 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:27:29.959 03:22:00 compress_compdev -- compress/compress.sh@62 -- # killprocess 39202 00:27:29.959 03:22:00 compress_compdev -- common/autotest_common.sh@946 -- # '[' -z 39202 ']' 00:27:29.959 03:22:00 compress_compdev -- 
common/autotest_common.sh@950 -- # kill -0 39202 00:27:29.959 03:22:00 compress_compdev -- common/autotest_common.sh@951 -- # uname 00:27:29.959 03:22:00 compress_compdev -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:29.959 03:22:00 compress_compdev -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 39202 00:27:29.959 03:22:01 compress_compdev -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:29.959 03:22:01 compress_compdev -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:29.959 03:22:01 compress_compdev -- common/autotest_common.sh@964 -- # echo 'killing process with pid 39202' 00:27:29.959 killing process with pid 39202 00:27:29.959 03:22:01 compress_compdev -- common/autotest_common.sh@965 -- # kill 39202 00:27:29.959 03:22:01 compress_compdev -- common/autotest_common.sh@970 -- # wait 39202 00:27:31.859 03:22:02 compress_compdev -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:27:31.859 03:22:02 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:27:31.859 00:27:31.859 real 0m47.274s 00:27:31.859 user 1m49.337s 00:27:31.859 sys 0m4.456s 00:27:31.859 03:22:02 compress_compdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:31.859 03:22:02 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:27:31.859 ************************************ 00:27:31.859 END TEST compress_compdev 00:27:31.859 ************************************ 00:27:31.859 03:22:02 -- spdk/autotest.sh@345 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:27:31.859 03:22:02 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:31.859 03:22:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:31.859 03:22:02 -- common/autotest_common.sh@10 -- # set +x 00:27:31.859 ************************************ 00:27:31.859 START TEST compress_isal 00:27:31.859 ************************************ 00:27:31.859 03:22:02 compress_isal -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:27:31.859 * Looking for test storage... 
00:27:31.859 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:27:31.859 03:22:02 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:27:31.859 03:22:02 compress_isal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.859 03:22:02 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.859 03:22:02 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.859 03:22:02 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.859 03:22:02 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.859 03:22:02 compress_isal -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.859 03:22:02 compress_isal -- paths/export.sh@5 -- # export PATH 00:27:31.859 03:22:02 compress_isal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@47 -- # : 0 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.859 03:22:02 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.859 03:22:02 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:31.859 03:22:02 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:27:31.859 03:22:02 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:27:31.859 03:22:02 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:27:31.859 03:22:02 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:27:31.859 03:22:02 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=40799 00:27:31.859 03:22:02 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:31.859 03:22:02 compress_isal -- compress/compress.sh@73 -- # waitforlisten 40799 00:27:31.859 03:22:02 compress_isal -- common/autotest_common.sh@827 -- # '[' -z 40799 ']' 00:27:31.859 03:22:02 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:27:31.859 03:22:02 compress_isal -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.859 03:22:02 compress_isal -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:31.859 03:22:02 compress_isal -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
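Note the difference from the compress_compdev invocations above: the compress_isal bdevperf is started with the same I/O parameters but without the -c .../test/compress/dpdk.json compressdev configuration, so this phase (test_type=isal) exercises the ISA-L software path rather than the compress_qat PMD used earlier. Side by side, from the trace (paths shortened):

# compdev runs (earlier in this log):
bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c .../spdk/test/compress/dpdk.json
# isal run (here):
bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6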
00:27:31.860 03:22:02 compress_isal -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:31.860 03:22:02 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:27:31.860 [2024-05-15 03:22:02.850439] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:27:31.860 [2024-05-15 03:22:02.850497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40799 ] 00:27:31.860 [2024-05-15 03:22:02.944520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:32.118 [2024-05-15 03:22:03.041458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.118 [2024-05-15 03:22:03.041464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.683 03:22:03 compress_isal -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:32.683 03:22:03 compress_isal -- common/autotest_common.sh@860 -- # return 0 00:27:32.683 03:22:03 compress_isal -- compress/compress.sh@74 -- # create_vols 00:27:32.684 03:22:03 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:32.684 03:22:03 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:35.961 03:22:06 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:27:35.961 03:22:06 compress_isal -- common/autotest_common.sh@895 -- # local bdev_name=Nvme0n1 00:27:35.961 03:22:06 compress_isal -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:35.961 03:22:06 compress_isal -- common/autotest_common.sh@897 -- # local i 00:27:35.961 03:22:06 compress_isal -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:35.961 03:22:06 compress_isal -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:35.961 03:22:06 compress_isal -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:36.220 03:22:07 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:27:36.478 [ 00:27:36.478 { 00:27:36.478 "name": "Nvme0n1", 00:27:36.478 "aliases": [ 00:27:36.478 "d4712b82-f4e9-48f6-9e6e-0a04f809c628" 00:27:36.478 ], 00:27:36.478 "product_name": "NVMe disk", 00:27:36.478 "block_size": 512, 00:27:36.478 "num_blocks": 1953525168, 00:27:36.478 "uuid": "d4712b82-f4e9-48f6-9e6e-0a04f809c628", 00:27:36.478 "assigned_rate_limits": { 00:27:36.478 "rw_ios_per_sec": 0, 00:27:36.478 "rw_mbytes_per_sec": 0, 00:27:36.478 "r_mbytes_per_sec": 0, 00:27:36.478 "w_mbytes_per_sec": 0 00:27:36.478 }, 00:27:36.478 "claimed": false, 00:27:36.478 "zoned": false, 00:27:36.478 "supported_io_types": { 00:27:36.478 "read": true, 00:27:36.478 "write": true, 00:27:36.478 "unmap": true, 00:27:36.478 "write_zeroes": true, 00:27:36.478 "flush": true, 00:27:36.478 "reset": true, 00:27:36.478 "compare": false, 00:27:36.478 "compare_and_write": false, 00:27:36.478 "abort": true, 00:27:36.478 "nvme_admin": true, 00:27:36.478 "nvme_io": true 00:27:36.478 }, 00:27:36.478 "driver_specific": { 00:27:36.478 "nvme": [ 00:27:36.478 { 00:27:36.478 "pci_address": "0000:5e:00.0", 00:27:36.478 "trid": { 00:27:36.478 "trtype": "PCIe", 00:27:36.478 "traddr": "0000:5e:00.0" 00:27:36.478 }, 00:27:36.478 "ctrlr_data": { 00:27:36.478 "cntlid": 0, 00:27:36.478 
"vendor_id": "0x8086", 00:27:36.478 "model_number": "INTEL SSDPE2KX010T8", 00:27:36.478 "serial_number": "BTLJ807001JM1P0FGN", 00:27:36.478 "firmware_revision": "VDV10170", 00:27:36.478 "oacs": { 00:27:36.478 "security": 1, 00:27:36.478 "format": 1, 00:27:36.478 "firmware": 1, 00:27:36.478 "ns_manage": 1 00:27:36.478 }, 00:27:36.478 "multi_ctrlr": false, 00:27:36.478 "ana_reporting": false 00:27:36.478 }, 00:27:36.478 "vs": { 00:27:36.478 "nvme_version": "1.2" 00:27:36.478 }, 00:27:36.478 "ns_data": { 00:27:36.478 "id": 1, 00:27:36.478 "can_share": false 00:27:36.478 }, 00:27:36.478 "security": { 00:27:36.478 "opal": true 00:27:36.478 } 00:27:36.478 } 00:27:36.478 ], 00:27:36.478 "mp_policy": "active_passive" 00:27:36.478 } 00:27:36.478 } 00:27:36.478 ] 00:27:36.478 03:22:07 compress_isal -- common/autotest_common.sh@903 -- # return 0 00:27:36.478 03:22:07 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:27:37.852 d64fcca3-e999-42fb-a835-70499ac16e5d 00:27:37.852 03:22:08 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:27:37.852 1b800ed0-3b4b-4578-89f9-52746ac59c4c 00:27:37.852 03:22:08 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:27:37.852 03:22:08 compress_isal -- common/autotest_common.sh@895 -- # local bdev_name=lvs0/lv0 00:27:37.852 03:22:08 compress_isal -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:37.852 03:22:08 compress_isal -- common/autotest_common.sh@897 -- # local i 00:27:37.852 03:22:08 compress_isal -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:37.852 03:22:08 compress_isal -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:37.852 03:22:08 compress_isal -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:38.110 03:22:09 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:27:38.368 [ 00:27:38.368 { 00:27:38.368 "name": "1b800ed0-3b4b-4578-89f9-52746ac59c4c", 00:27:38.368 "aliases": [ 00:27:38.368 "lvs0/lv0" 00:27:38.368 ], 00:27:38.368 "product_name": "Logical Volume", 00:27:38.368 "block_size": 512, 00:27:38.368 "num_blocks": 204800, 00:27:38.368 "uuid": "1b800ed0-3b4b-4578-89f9-52746ac59c4c", 00:27:38.368 "assigned_rate_limits": { 00:27:38.368 "rw_ios_per_sec": 0, 00:27:38.368 "rw_mbytes_per_sec": 0, 00:27:38.368 "r_mbytes_per_sec": 0, 00:27:38.368 "w_mbytes_per_sec": 0 00:27:38.368 }, 00:27:38.368 "claimed": false, 00:27:38.368 "zoned": false, 00:27:38.368 "supported_io_types": { 00:27:38.368 "read": true, 00:27:38.368 "write": true, 00:27:38.368 "unmap": true, 00:27:38.368 "write_zeroes": true, 00:27:38.368 "flush": false, 00:27:38.368 "reset": true, 00:27:38.368 "compare": false, 00:27:38.368 "compare_and_write": false, 00:27:38.368 "abort": false, 00:27:38.368 "nvme_admin": false, 00:27:38.368 "nvme_io": false 00:27:38.368 }, 00:27:38.368 "driver_specific": { 00:27:38.368 "lvol": { 00:27:38.368 "lvol_store_uuid": "d64fcca3-e999-42fb-a835-70499ac16e5d", 00:27:38.368 "base_bdev": "Nvme0n1", 00:27:38.368 "thin_provision": true, 00:27:38.368 "num_allocated_clusters": 0, 00:27:38.368 "snapshot": false, 00:27:38.368 "clone": false, 00:27:38.368 "esnap_clone": false 00:27:38.368 } 00:27:38.368 } 00:27:38.368 } 
00:27:38.368 ] 00:27:38.368 03:22:09 compress_isal -- common/autotest_common.sh@903 -- # return 0 00:27:38.368 03:22:09 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:27:38.368 03:22:09 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:27:38.627 [2024-05-15 03:22:09.612109] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:27:38.627 COMP_lvs0/lv0 00:27:38.627 03:22:09 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:27:38.627 03:22:09 compress_isal -- common/autotest_common.sh@895 -- # local bdev_name=COMP_lvs0/lv0 00:27:38.627 03:22:09 compress_isal -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:38.627 03:22:09 compress_isal -- common/autotest_common.sh@897 -- # local i 00:27:38.627 03:22:09 compress_isal -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:38.627 03:22:09 compress_isal -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:38.627 03:22:09 compress_isal -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:38.886 03:22:09 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:27:39.145 [ 00:27:39.145 { 00:27:39.145 "name": "COMP_lvs0/lv0", 00:27:39.145 "aliases": [ 00:27:39.145 "3245b7ef-880e-5778-ab16-ec1c59c02461" 00:27:39.145 ], 00:27:39.145 "product_name": "compress", 00:27:39.145 "block_size": 512, 00:27:39.145 "num_blocks": 200704, 00:27:39.145 "uuid": "3245b7ef-880e-5778-ab16-ec1c59c02461", 00:27:39.145 "assigned_rate_limits": { 00:27:39.145 "rw_ios_per_sec": 0, 00:27:39.145 "rw_mbytes_per_sec": 0, 00:27:39.145 "r_mbytes_per_sec": 0, 00:27:39.145 "w_mbytes_per_sec": 0 00:27:39.145 }, 00:27:39.145 "claimed": false, 00:27:39.145 "zoned": false, 00:27:39.145 "supported_io_types": { 00:27:39.145 "read": true, 00:27:39.145 "write": true, 00:27:39.145 "unmap": false, 00:27:39.145 "write_zeroes": true, 00:27:39.145 "flush": false, 00:27:39.145 "reset": false, 00:27:39.145 "compare": false, 00:27:39.145 "compare_and_write": false, 00:27:39.145 "abort": false, 00:27:39.145 "nvme_admin": false, 00:27:39.145 "nvme_io": false 00:27:39.145 }, 00:27:39.145 "driver_specific": { 00:27:39.145 "compress": { 00:27:39.145 "name": "COMP_lvs0/lv0", 00:27:39.145 "base_bdev_name": "1b800ed0-3b4b-4578-89f9-52746ac59c4c" 00:27:39.145 } 00:27:39.145 } 00:27:39.145 } 00:27:39.145 ] 00:27:39.145 03:22:10 compress_isal -- common/autotest_common.sh@903 -- # return 0 00:27:39.145 03:22:10 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:39.145 Running I/O for 3 seconds... 
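While the 3-second verify workload runs, it is worth restating what the create_vols trace above actually did. A minimal bash sketch, reconstructed from the xtrace lines (the pipe from gen_nvme.sh into load_subsystem_config and the $SPDK_DIR shorthand are assumptions; the RPC names and flags are copied verbatim from the trace):

  # Sketch only; $SPDK_DIR is assumed, not part of the traced script.
  SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
  rpc=$SPDK_DIR/scripts/rpc.py
  # Attach the local NVMe disk at 0000:5e:00.0 so it appears as Nvme0n1.
  $SPDK_DIR/scripts/gen_nvme.sh | $rpc load_subsystem_config
  $rpc bdev_wait_for_examine
  # Build the lvstore and a 100 MiB thin-provisioned volume on top of it.
  $rpc bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
  $rpc bdev_lvol_create -t -l lvs0 lv0 100
  # Layer the compress vbdev over the lvol; -p is the pmem backing path.
  $rpc bdev_compress_create -b lvs0/lv0 -p /tmp/pmem   # no -l, so the default logical block size is used

After each creation step the waitforbdev helper polls bdev_get_bdevs -b <name> -t 2000 until the bdev shows up, which is what produces the JSON dumps above. The verify results for this pass follow.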
00:27:42.494 00:27:42.494 Latency(us) 00:27:42.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.494 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:27:42.494 Verification LBA range: start 0x0 length 0x3100 00:27:42.494 COMP_lvs0/lv0 : 3.01 3308.36 12.92 0.00 0.00 9601.44 59.00 14043.43 00:27:42.494 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:27:42.494 Verification LBA range: start 0x3100 length 0x3100 00:27:42.494 COMP_lvs0/lv0 : 3.01 3323.36 12.98 0.00 0.00 9575.14 56.32 14417.92 00:27:42.494 =================================================================================================================== 00:27:42.494 Total : 6631.72 25.91 0.00 0.00 9588.26 56.32 14417.92 00:27:42.494 0 00:27:42.494 03:22:13 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:27:42.494 03:22:13 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:27:42.494 03:22:13 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:27:42.752 03:22:13 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:42.752 03:22:13 compress_isal -- compress/compress.sh@78 -- # killprocess 40799 00:27:42.752 03:22:13 compress_isal -- common/autotest_common.sh@946 -- # '[' -z 40799 ']' 00:27:42.752 03:22:13 compress_isal -- common/autotest_common.sh@950 -- # kill -0 40799 00:27:42.752 03:22:13 compress_isal -- common/autotest_common.sh@951 -- # uname 00:27:42.752 03:22:13 compress_isal -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:42.752 03:22:13 compress_isal -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 40799 00:27:42.752 03:22:13 compress_isal -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:42.752 03:22:13 compress_isal -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:42.753 03:22:13 compress_isal -- common/autotest_common.sh@964 -- # echo 'killing process with pid 40799' 00:27:42.753 killing process with pid 40799 00:27:42.753 03:22:13 compress_isal -- common/autotest_common.sh@965 -- # kill 40799 00:27:42.753 Received shutdown signal, test time was about 3.000000 seconds 00:27:42.753 00:27:42.753 Latency(us) 00:27:42.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.753 =================================================================================================================== 00:27:42.753 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:42.753 03:22:13 compress_isal -- common/autotest_common.sh@970 -- # wait 40799 00:27:44.654 03:22:15 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:27:44.654 03:22:15 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:27:44.654 03:22:15 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=42824 00:27:44.654 03:22:15 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:44.654 03:22:15 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:27:44.654 03:22:15 compress_isal -- compress/compress.sh@73 -- # waitforlisten 42824 00:27:44.654 03:22:15 compress_isal -- common/autotest_common.sh@827 -- # '[' -z 42824 ']' 
00:27:44.654 03:22:15 compress_isal -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.654 03:22:15 compress_isal -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:44.654 03:22:15 compress_isal -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.654 03:22:15 compress_isal -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:44.654 03:22:15 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:27:44.654 [2024-05-15 03:22:15.467171] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:27:44.654 [2024-05-15 03:22:15.467231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42824 ] 00:27:44.654 [2024-05-15 03:22:15.560907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:44.654 [2024-05-15 03:22:15.650063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.654 [2024-05-15 03:22:15.650070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.590 03:22:16 compress_isal -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:45.590 03:22:16 compress_isal -- common/autotest_common.sh@860 -- # return 0 00:27:45.590 03:22:16 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:27:45.590 03:22:16 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:45.590 03:22:16 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:48.871 03:22:19 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:27:48.871 03:22:19 compress_isal -- common/autotest_common.sh@895 -- # local bdev_name=Nvme0n1 00:27:48.871 03:22:19 compress_isal -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:48.871 03:22:19 compress_isal -- common/autotest_common.sh@897 -- # local i 00:27:48.871 03:22:19 compress_isal -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:48.871 03:22:19 compress_isal -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:48.871 03:22:19 compress_isal -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:48.871 03:22:19 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:27:49.129 [ 00:27:49.129 { 00:27:49.129 "name": "Nvme0n1", 00:27:49.129 "aliases": [ 00:27:49.129 "6e1702ab-9d00-40da-ab95-bcaea685e4ea" 00:27:49.129 ], 00:27:49.129 "product_name": "NVMe disk", 00:27:49.129 "block_size": 512, 00:27:49.129 "num_blocks": 1953525168, 00:27:49.129 "uuid": "6e1702ab-9d00-40da-ab95-bcaea685e4ea", 00:27:49.129 "assigned_rate_limits": { 00:27:49.129 "rw_ios_per_sec": 0, 00:27:49.129 "rw_mbytes_per_sec": 0, 00:27:49.129 "r_mbytes_per_sec": 0, 00:27:49.129 "w_mbytes_per_sec": 0 00:27:49.129 }, 00:27:49.129 "claimed": false, 00:27:49.129 "zoned": false, 00:27:49.129 "supported_io_types": { 00:27:49.129 "read": true, 00:27:49.129 "write": true, 00:27:49.129 "unmap": true, 00:27:49.129 "write_zeroes": true, 00:27:49.129 "flush": true, 00:27:49.129 "reset": 
true, 00:27:49.129 "compare": false, 00:27:49.129 "compare_and_write": false, 00:27:49.129 "abort": true, 00:27:49.129 "nvme_admin": true, 00:27:49.129 "nvme_io": true 00:27:49.129 }, 00:27:49.129 "driver_specific": { 00:27:49.129 "nvme": [ 00:27:49.129 { 00:27:49.129 "pci_address": "0000:5e:00.0", 00:27:49.129 "trid": { 00:27:49.129 "trtype": "PCIe", 00:27:49.129 "traddr": "0000:5e:00.0" 00:27:49.129 }, 00:27:49.129 "ctrlr_data": { 00:27:49.129 "cntlid": 0, 00:27:49.129 "vendor_id": "0x8086", 00:27:49.129 "model_number": "INTEL SSDPE2KX010T8", 00:27:49.129 "serial_number": "BTLJ807001JM1P0FGN", 00:27:49.129 "firmware_revision": "VDV10170", 00:27:49.129 "oacs": { 00:27:49.129 "security": 1, 00:27:49.129 "format": 1, 00:27:49.129 "firmware": 1, 00:27:49.129 "ns_manage": 1 00:27:49.129 }, 00:27:49.129 "multi_ctrlr": false, 00:27:49.129 "ana_reporting": false 00:27:49.129 }, 00:27:49.129 "vs": { 00:27:49.129 "nvme_version": "1.2" 00:27:49.129 }, 00:27:49.129 "ns_data": { 00:27:49.129 "id": 1, 00:27:49.129 "can_share": false 00:27:49.129 }, 00:27:49.129 "security": { 00:27:49.129 "opal": true 00:27:49.129 } 00:27:49.129 } 00:27:49.129 ], 00:27:49.129 "mp_policy": "active_passive" 00:27:49.129 } 00:27:49.129 } 00:27:49.129 ] 00:27:49.129 03:22:20 compress_isal -- common/autotest_common.sh@903 -- # return 0 00:27:49.129 03:22:20 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:27:50.062 3dca85d0-a677-4142-854d-955c889c63f3 00:27:50.062 03:22:21 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:27:50.320 65ac07dc-978a-421c-9d17-7ecfa478f1cc 00:27:50.320 03:22:21 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:27:50.320 03:22:21 compress_isal -- common/autotest_common.sh@895 -- # local bdev_name=lvs0/lv0 00:27:50.320 03:22:21 compress_isal -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:50.320 03:22:21 compress_isal -- common/autotest_common.sh@897 -- # local i 00:27:50.320 03:22:21 compress_isal -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:50.320 03:22:21 compress_isal -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:50.320 03:22:21 compress_isal -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:50.579 03:22:21 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:27:50.837 [ 00:27:50.837 { 00:27:50.837 "name": "65ac07dc-978a-421c-9d17-7ecfa478f1cc", 00:27:50.837 "aliases": [ 00:27:50.837 "lvs0/lv0" 00:27:50.837 ], 00:27:50.837 "product_name": "Logical Volume", 00:27:50.837 "block_size": 512, 00:27:50.837 "num_blocks": 204800, 00:27:50.837 "uuid": "65ac07dc-978a-421c-9d17-7ecfa478f1cc", 00:27:50.837 "assigned_rate_limits": { 00:27:50.837 "rw_ios_per_sec": 0, 00:27:50.837 "rw_mbytes_per_sec": 0, 00:27:50.837 "r_mbytes_per_sec": 0, 00:27:50.837 "w_mbytes_per_sec": 0 00:27:50.837 }, 00:27:50.837 "claimed": false, 00:27:50.837 "zoned": false, 00:27:50.837 "supported_io_types": { 00:27:50.837 "read": true, 00:27:50.837 "write": true, 00:27:50.837 "unmap": true, 00:27:50.837 "write_zeroes": true, 00:27:50.837 "flush": false, 00:27:50.837 "reset": true, 00:27:50.837 "compare": false, 00:27:50.837 "compare_and_write": false, 00:27:50.837 "abort": false, 
00:27:50.837 "nvme_admin": false, 00:27:50.837 "nvme_io": false 00:27:50.837 }, 00:27:50.837 "driver_specific": { 00:27:50.837 "lvol": { 00:27:50.837 "lvol_store_uuid": "3dca85d0-a677-4142-854d-955c889c63f3", 00:27:50.837 "base_bdev": "Nvme0n1", 00:27:50.837 "thin_provision": true, 00:27:50.837 "num_allocated_clusters": 0, 00:27:50.837 "snapshot": false, 00:27:50.837 "clone": false, 00:27:50.837 "esnap_clone": false 00:27:50.837 } 00:27:50.837 } 00:27:50.837 } 00:27:50.837 ] 00:27:50.837 03:22:21 compress_isal -- common/autotest_common.sh@903 -- # return 0 00:27:50.837 03:22:21 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:27:50.837 03:22:21 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:27:51.095 [2024-05-15 03:22:22.141860] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:27:51.095 COMP_lvs0/lv0 00:27:51.095 03:22:22 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:27:51.095 03:22:22 compress_isal -- common/autotest_common.sh@895 -- # local bdev_name=COMP_lvs0/lv0 00:27:51.096 03:22:22 compress_isal -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:51.096 03:22:22 compress_isal -- common/autotest_common.sh@897 -- # local i 00:27:51.096 03:22:22 compress_isal -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:51.096 03:22:22 compress_isal -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:51.096 03:22:22 compress_isal -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:51.354 03:22:22 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:27:51.612 [ 00:27:51.612 { 00:27:51.612 "name": "COMP_lvs0/lv0", 00:27:51.612 "aliases": [ 00:27:51.612 "851949d4-c3ee-55a1-b519-46bce3d57a50" 00:27:51.612 ], 00:27:51.612 "product_name": "compress", 00:27:51.612 "block_size": 512, 00:27:51.612 "num_blocks": 200704, 00:27:51.612 "uuid": "851949d4-c3ee-55a1-b519-46bce3d57a50", 00:27:51.612 "assigned_rate_limits": { 00:27:51.612 "rw_ios_per_sec": 0, 00:27:51.612 "rw_mbytes_per_sec": 0, 00:27:51.612 "r_mbytes_per_sec": 0, 00:27:51.612 "w_mbytes_per_sec": 0 00:27:51.612 }, 00:27:51.612 "claimed": false, 00:27:51.612 "zoned": false, 00:27:51.612 "supported_io_types": { 00:27:51.612 "read": true, 00:27:51.612 "write": true, 00:27:51.612 "unmap": false, 00:27:51.612 "write_zeroes": true, 00:27:51.612 "flush": false, 00:27:51.612 "reset": false, 00:27:51.612 "compare": false, 00:27:51.612 "compare_and_write": false, 00:27:51.612 "abort": false, 00:27:51.612 "nvme_admin": false, 00:27:51.612 "nvme_io": false 00:27:51.612 }, 00:27:51.612 "driver_specific": { 00:27:51.612 "compress": { 00:27:51.612 "name": "COMP_lvs0/lv0", 00:27:51.612 "base_bdev_name": "65ac07dc-978a-421c-9d17-7ecfa478f1cc" 00:27:51.612 } 00:27:51.612 } 00:27:51.612 } 00:27:51.612 ] 00:27:51.612 03:22:22 compress_isal -- common/autotest_common.sh@903 -- # return 0 00:27:51.612 03:22:22 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:51.871 Running I/O for 3 seconds... 
00:27:55.157 00:27:55.157 Latency(us) 00:27:55.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.157 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:27:55.157 Verification LBA range: start 0x0 length 0x3100 00:27:55.157 COMP_lvs0/lv0 : 3.01 3347.40 13.08 0.00 0.00 9493.97 58.76 14667.58 00:27:55.157 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:27:55.157 Verification LBA range: start 0x3100 length 0x3100 00:27:55.157 COMP_lvs0/lv0 : 3.01 3362.06 13.13 0.00 0.00 9473.75 56.08 15354.15 00:27:55.157 =================================================================================================================== 00:27:55.157 Total : 6709.46 26.21 0.00 0.00 9483.84 56.08 15354.15 00:27:55.157 0 00:27:55.158 03:22:25 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:27:55.158 03:22:25 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:27:55.158 03:22:26 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:27:55.416 03:22:26 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:55.416 03:22:26 compress_isal -- compress/compress.sh@78 -- # killprocess 42824 00:27:55.416 03:22:26 compress_isal -- common/autotest_common.sh@946 -- # '[' -z 42824 ']' 00:27:55.416 03:22:26 compress_isal -- common/autotest_common.sh@950 -- # kill -0 42824 00:27:55.416 03:22:26 compress_isal -- common/autotest_common.sh@951 -- # uname 00:27:55.416 03:22:26 compress_isal -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:55.416 03:22:26 compress_isal -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 42824 00:27:55.416 03:22:26 compress_isal -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:55.416 03:22:26 compress_isal -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:55.416 03:22:26 compress_isal -- common/autotest_common.sh@964 -- # echo 'killing process with pid 42824' 00:27:55.416 killing process with pid 42824 00:27:55.416 03:22:26 compress_isal -- common/autotest_common.sh@965 -- # kill 42824 00:27:55.416 Received shutdown signal, test time was about 3.000000 seconds 00:27:55.416 00:27:55.416 Latency(us) 00:27:55.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.416 =================================================================================================================== 00:27:55.416 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:55.416 03:22:26 compress_isal -- common/autotest_common.sh@970 -- # wait 42824 00:27:56.789 03:22:27 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:27:56.789 03:22:27 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:27:56.789 03:22:27 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=44881 00:27:56.789 03:22:27 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:56.789 03:22:27 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:27:56.789 03:22:27 compress_isal -- compress/compress.sh@73 -- # waitforlisten 44881 00:27:56.789 03:22:27 compress_isal -- common/autotest_common.sh@827 -- # '[' -z 44881 ']' 
00:27:56.789 03:22:27 compress_isal -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.789 03:22:27 compress_isal -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:56.789 03:22:27 compress_isal -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.790 03:22:27 compress_isal -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:56.790 03:22:27 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:27:56.790 [2024-05-15 03:22:27.948628] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:27:56.790 [2024-05-15 03:22:27.948687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44881 ] 00:27:57.048 [2024-05-15 03:22:28.041935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:57.048 [2024-05-15 03:22:28.135366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.048 [2024-05-15 03:22:28.135373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.982 03:22:28 compress_isal -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:57.982 03:22:28 compress_isal -- common/autotest_common.sh@860 -- # return 0 00:27:57.982 03:22:28 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:27:57.982 03:22:28 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:57.982 03:22:28 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:01.265 03:22:31 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:01.265 03:22:31 compress_isal -- common/autotest_common.sh@895 -- # local bdev_name=Nvme0n1 00:28:01.265 03:22:31 compress_isal -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:01.265 03:22:31 compress_isal -- common/autotest_common.sh@897 -- # local i 00:28:01.265 03:22:31 compress_isal -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:01.265 03:22:31 compress_isal -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:01.265 03:22:31 compress_isal -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:01.265 03:22:32 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:01.523 [ 00:28:01.523 { 00:28:01.523 "name": "Nvme0n1", 00:28:01.523 "aliases": [ 00:28:01.524 "7af848ff-b27e-47a9-9a02-73a9dcb3f061" 00:28:01.524 ], 00:28:01.524 "product_name": "NVMe disk", 00:28:01.524 "block_size": 512, 00:28:01.524 "num_blocks": 1953525168, 00:28:01.524 "uuid": "7af848ff-b27e-47a9-9a02-73a9dcb3f061", 00:28:01.524 "assigned_rate_limits": { 00:28:01.524 "rw_ios_per_sec": 0, 00:28:01.524 "rw_mbytes_per_sec": 0, 00:28:01.524 "r_mbytes_per_sec": 0, 00:28:01.524 "w_mbytes_per_sec": 0 00:28:01.524 }, 00:28:01.524 "claimed": false, 00:28:01.524 "zoned": false, 00:28:01.524 "supported_io_types": { 00:28:01.524 "read": true, 00:28:01.524 "write": true, 00:28:01.524 "unmap": true, 00:28:01.524 "write_zeroes": true, 00:28:01.524 "flush": true, 00:28:01.524 "reset": 
true, 00:28:01.524 "compare": false, 00:28:01.524 "compare_and_write": false, 00:28:01.524 "abort": true, 00:28:01.524 "nvme_admin": true, 00:28:01.524 "nvme_io": true 00:28:01.524 }, 00:28:01.524 "driver_specific": { 00:28:01.524 "nvme": [ 00:28:01.524 { 00:28:01.524 "pci_address": "0000:5e:00.0", 00:28:01.524 "trid": { 00:28:01.524 "trtype": "PCIe", 00:28:01.524 "traddr": "0000:5e:00.0" 00:28:01.524 }, 00:28:01.524 "ctrlr_data": { 00:28:01.524 "cntlid": 0, 00:28:01.524 "vendor_id": "0x8086", 00:28:01.524 "model_number": "INTEL SSDPE2KX010T8", 00:28:01.524 "serial_number": "BTLJ807001JM1P0FGN", 00:28:01.524 "firmware_revision": "VDV10170", 00:28:01.524 "oacs": { 00:28:01.524 "security": 1, 00:28:01.524 "format": 1, 00:28:01.524 "firmware": 1, 00:28:01.524 "ns_manage": 1 00:28:01.524 }, 00:28:01.524 "multi_ctrlr": false, 00:28:01.524 "ana_reporting": false 00:28:01.524 }, 00:28:01.524 "vs": { 00:28:01.524 "nvme_version": "1.2" 00:28:01.524 }, 00:28:01.524 "ns_data": { 00:28:01.524 "id": 1, 00:28:01.524 "can_share": false 00:28:01.524 }, 00:28:01.524 "security": { 00:28:01.524 "opal": true 00:28:01.524 } 00:28:01.524 } 00:28:01.524 ], 00:28:01.524 "mp_policy": "active_passive" 00:28:01.524 } 00:28:01.524 } 00:28:01.524 ] 00:28:01.524 03:22:32 compress_isal -- common/autotest_common.sh@903 -- # return 0 00:28:01.524 03:22:32 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:02.465 6a5ae240-2886-4e0c-ab25-16812e18f218 00:28:02.466 03:22:33 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:02.729 e6958534-4db5-4c30-8dbd-45d75b3a6437 00:28:02.729 03:22:33 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:02.729 03:22:33 compress_isal -- common/autotest_common.sh@895 -- # local bdev_name=lvs0/lv0 00:28:02.729 03:22:33 compress_isal -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:02.729 03:22:33 compress_isal -- common/autotest_common.sh@897 -- # local i 00:28:02.729 03:22:33 compress_isal -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:02.729 03:22:33 compress_isal -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:02.729 03:22:33 compress_isal -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:02.986 03:22:34 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:03.265 [ 00:28:03.265 { 00:28:03.265 "name": "e6958534-4db5-4c30-8dbd-45d75b3a6437", 00:28:03.265 "aliases": [ 00:28:03.265 "lvs0/lv0" 00:28:03.265 ], 00:28:03.265 "product_name": "Logical Volume", 00:28:03.265 "block_size": 512, 00:28:03.265 "num_blocks": 204800, 00:28:03.265 "uuid": "e6958534-4db5-4c30-8dbd-45d75b3a6437", 00:28:03.265 "assigned_rate_limits": { 00:28:03.265 "rw_ios_per_sec": 0, 00:28:03.265 "rw_mbytes_per_sec": 0, 00:28:03.265 "r_mbytes_per_sec": 0, 00:28:03.265 "w_mbytes_per_sec": 0 00:28:03.265 }, 00:28:03.265 "claimed": false, 00:28:03.265 "zoned": false, 00:28:03.265 "supported_io_types": { 00:28:03.265 "read": true, 00:28:03.265 "write": true, 00:28:03.265 "unmap": true, 00:28:03.265 "write_zeroes": true, 00:28:03.265 "flush": false, 00:28:03.265 "reset": true, 00:28:03.265 "compare": false, 00:28:03.265 "compare_and_write": false, 00:28:03.265 "abort": false, 
00:28:03.265 "nvme_admin": false, 00:28:03.265 "nvme_io": false 00:28:03.265 }, 00:28:03.265 "driver_specific": { 00:28:03.265 "lvol": { 00:28:03.265 "lvol_store_uuid": "6a5ae240-2886-4e0c-ab25-16812e18f218", 00:28:03.265 "base_bdev": "Nvme0n1", 00:28:03.265 "thin_provision": true, 00:28:03.265 "num_allocated_clusters": 0, 00:28:03.265 "snapshot": false, 00:28:03.265 "clone": false, 00:28:03.265 "esnap_clone": false 00:28:03.265 } 00:28:03.265 } 00:28:03.265 } 00:28:03.265 ] 00:28:03.265 03:22:34 compress_isal -- common/autotest_common.sh@903 -- # return 0 00:28:03.265 03:22:34 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:28:03.265 03:22:34 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:28:03.523 [2024-05-15 03:22:34.505078] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:03.523 COMP_lvs0/lv0 00:28:03.523 03:22:34 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:03.523 03:22:34 compress_isal -- common/autotest_common.sh@895 -- # local bdev_name=COMP_lvs0/lv0 00:28:03.523 03:22:34 compress_isal -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:03.523 03:22:34 compress_isal -- common/autotest_common.sh@897 -- # local i 00:28:03.523 03:22:34 compress_isal -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:03.523 03:22:34 compress_isal -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:03.523 03:22:34 compress_isal -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:03.781 03:22:34 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:04.039 [ 00:28:04.039 { 00:28:04.039 "name": "COMP_lvs0/lv0", 00:28:04.039 "aliases": [ 00:28:04.039 "2bc88ddb-1a5e-59c8-a51d-8a2d5fcbd3c4" 00:28:04.039 ], 00:28:04.039 "product_name": "compress", 00:28:04.039 "block_size": 4096, 00:28:04.039 "num_blocks": 25088, 00:28:04.039 "uuid": "2bc88ddb-1a5e-59c8-a51d-8a2d5fcbd3c4", 00:28:04.039 "assigned_rate_limits": { 00:28:04.039 "rw_ios_per_sec": 0, 00:28:04.039 "rw_mbytes_per_sec": 0, 00:28:04.039 "r_mbytes_per_sec": 0, 00:28:04.039 "w_mbytes_per_sec": 0 00:28:04.039 }, 00:28:04.039 "claimed": false, 00:28:04.039 "zoned": false, 00:28:04.039 "supported_io_types": { 00:28:04.039 "read": true, 00:28:04.039 "write": true, 00:28:04.039 "unmap": false, 00:28:04.039 "write_zeroes": true, 00:28:04.039 "flush": false, 00:28:04.039 "reset": false, 00:28:04.039 "compare": false, 00:28:04.039 "compare_and_write": false, 00:28:04.039 "abort": false, 00:28:04.039 "nvme_admin": false, 00:28:04.039 "nvme_io": false 00:28:04.039 }, 00:28:04.039 "driver_specific": { 00:28:04.039 "compress": { 00:28:04.039 "name": "COMP_lvs0/lv0", 00:28:04.039 "base_bdev_name": "e6958534-4db5-4c30-8dbd-45d75b3a6437" 00:28:04.039 } 00:28:04.039 } 00:28:04.039 } 00:28:04.039 ] 00:28:04.039 03:22:35 compress_isal -- common/autotest_common.sh@903 -- # return 0 00:28:04.039 03:22:35 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:04.039 Running I/O for 3 seconds... 
00:28:07.323 00:28:07.323 Latency(us) 00:28:07.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.323 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:28:07.323 Verification LBA range: start 0x0 length 0x3100 00:28:07.323 COMP_lvs0/lv0 : 3.01 3339.37 13.04 0.00 0.00 9521.82 59.73 13668.94 00:28:07.323 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:28:07.323 Verification LBA range: start 0x3100 length 0x3100 00:28:07.323 COMP_lvs0/lv0 : 3.01 3375.74 13.19 0.00 0.00 9426.69 56.32 14168.26 00:28:07.323 =================================================================================================================== 00:28:07.323 Total : 6715.11 26.23 0.00 0.00 9473.97 56.32 14168.26 00:28:07.323 0 00:28:07.323 03:22:38 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:28:07.323 03:22:38 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:28:07.323 03:22:38 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:28:07.581 03:22:38 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:07.581 03:22:38 compress_isal -- compress/compress.sh@78 -- # killprocess 44881 00:28:07.581 03:22:38 compress_isal -- common/autotest_common.sh@946 -- # '[' -z 44881 ']' 00:28:07.581 03:22:38 compress_isal -- common/autotest_common.sh@950 -- # kill -0 44881 00:28:07.581 03:22:38 compress_isal -- common/autotest_common.sh@951 -- # uname 00:28:07.581 03:22:38 compress_isal -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:07.839 03:22:38 compress_isal -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 44881 00:28:07.839 03:22:38 compress_isal -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:07.839 03:22:38 compress_isal -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:07.839 03:22:38 compress_isal -- common/autotest_common.sh@964 -- # echo 'killing process with pid 44881' 00:28:07.839 killing process with pid 44881 00:28:07.839 03:22:38 compress_isal -- common/autotest_common.sh@965 -- # kill 44881 00:28:07.839 Received shutdown signal, test time was about 3.000000 seconds 00:28:07.839 00:28:07.839 Latency(us) 00:28:07.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.839 =================================================================================================================== 00:28:07.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:07.840 03:22:38 compress_isal -- common/autotest_common.sh@970 -- # wait 44881 00:28:09.214 03:22:40 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:28:09.214 03:22:40 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:28:09.214 03:22:40 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=46939 00:28:09.214 03:22:40 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:09.214 03:22:40 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:28:09.214 03:22:40 compress_isal -- compress/compress.sh@57 -- # waitforlisten 46939 00:28:09.214 03:22:40 compress_isal -- common/autotest_common.sh@827 -- # '[' -z 46939 ']' 00:28:09.214 03:22:40 compress_isal -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.214 03:22:40 compress_isal -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:09.214 03:22:40 compress_isal -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.214 03:22:40 compress_isal -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:09.214 03:22:40 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:28:09.214 [2024-05-15 03:22:40.305502] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:28:09.214 [2024-05-15 03:22:40.305562] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46939 ] 00:28:09.472 [2024-05-15 03:22:40.405162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:09.472 [2024-05-15 03:22:40.502255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.472 [2024-05-15 03:22:40.502349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.472 [2024-05-15 03:22:40.502354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.404 03:22:41 compress_isal -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:10.404 03:22:41 compress_isal -- common/autotest_common.sh@860 -- # return 0 00:28:10.404 03:22:41 compress_isal -- compress/compress.sh@58 -- # create_vols 00:28:10.404 03:22:41 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:10.404 03:22:41 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:13.685 03:22:44 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:13.685 03:22:44 compress_isal -- common/autotest_common.sh@895 -- # local bdev_name=Nvme0n1 00:28:13.685 03:22:44 compress_isal -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:13.685 03:22:44 compress_isal -- common/autotest_common.sh@897 -- # local i 00:28:13.685 03:22:44 compress_isal -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:13.685 03:22:44 compress_isal -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:13.685 03:22:44 compress_isal -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:13.685 03:22:44 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:13.943 [ 00:28:13.943 { 00:28:13.943 "name": "Nvme0n1", 00:28:13.943 "aliases": [ 00:28:13.943 "13f8bd1e-a3d0-4bc8-bbba-bcb10bf2d247" 00:28:13.943 ], 00:28:13.943 "product_name": "NVMe disk", 00:28:13.943 "block_size": 512, 00:28:13.943 "num_blocks": 1953525168, 00:28:13.943 "uuid": "13f8bd1e-a3d0-4bc8-bbba-bcb10bf2d247", 00:28:13.943 "assigned_rate_limits": { 00:28:13.943 "rw_ios_per_sec": 0, 00:28:13.943 "rw_mbytes_per_sec": 0, 00:28:13.943 "r_mbytes_per_sec": 0, 00:28:13.943 "w_mbytes_per_sec": 0 00:28:13.943 }, 00:28:13.943 "claimed": false, 00:28:13.943 "zoned": false, 00:28:13.943 "supported_io_types": { 00:28:13.943 "read": true, 00:28:13.943 "write": true, 00:28:13.943 "unmap": true, 00:28:13.943 
"write_zeroes": true, 00:28:13.943 "flush": true, 00:28:13.943 "reset": true, 00:28:13.943 "compare": false, 00:28:13.943 "compare_and_write": false, 00:28:13.943 "abort": true, 00:28:13.943 "nvme_admin": true, 00:28:13.943 "nvme_io": true 00:28:13.943 }, 00:28:13.943 "driver_specific": { 00:28:13.943 "nvme": [ 00:28:13.943 { 00:28:13.943 "pci_address": "0000:5e:00.0", 00:28:13.943 "trid": { 00:28:13.943 "trtype": "PCIe", 00:28:13.943 "traddr": "0000:5e:00.0" 00:28:13.943 }, 00:28:13.943 "ctrlr_data": { 00:28:13.943 "cntlid": 0, 00:28:13.943 "vendor_id": "0x8086", 00:28:13.943 "model_number": "INTEL SSDPE2KX010T8", 00:28:13.943 "serial_number": "BTLJ807001JM1P0FGN", 00:28:13.943 "firmware_revision": "VDV10170", 00:28:13.943 "oacs": { 00:28:13.943 "security": 1, 00:28:13.943 "format": 1, 00:28:13.943 "firmware": 1, 00:28:13.943 "ns_manage": 1 00:28:13.943 }, 00:28:13.943 "multi_ctrlr": false, 00:28:13.943 "ana_reporting": false 00:28:13.943 }, 00:28:13.943 "vs": { 00:28:13.943 "nvme_version": "1.2" 00:28:13.943 }, 00:28:13.943 "ns_data": { 00:28:13.943 "id": 1, 00:28:13.943 "can_share": false 00:28:13.943 }, 00:28:13.943 "security": { 00:28:13.943 "opal": true 00:28:13.943 } 00:28:13.943 } 00:28:13.943 ], 00:28:13.943 "mp_policy": "active_passive" 00:28:13.943 } 00:28:13.943 } 00:28:13.943 ] 00:28:13.943 03:22:44 compress_isal -- common/autotest_common.sh@903 -- # return 0 00:28:13.943 03:22:44 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:15.318 5da65d39-9fb7-4e63-92e2-48b9d7b6730e 00:28:15.318 03:22:46 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:15.318 1d6e6beb-1869-4cf7-bcb4-60de9ffeee36 00:28:15.318 03:22:46 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:15.318 03:22:46 compress_isal -- common/autotest_common.sh@895 -- # local bdev_name=lvs0/lv0 00:28:15.318 03:22:46 compress_isal -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:15.318 03:22:46 compress_isal -- common/autotest_common.sh@897 -- # local i 00:28:15.318 03:22:46 compress_isal -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:15.318 03:22:46 compress_isal -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:15.318 03:22:46 compress_isal -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:15.576 03:22:46 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:15.834 [ 00:28:15.834 { 00:28:15.834 "name": "1d6e6beb-1869-4cf7-bcb4-60de9ffeee36", 00:28:15.834 "aliases": [ 00:28:15.834 "lvs0/lv0" 00:28:15.834 ], 00:28:15.834 "product_name": "Logical Volume", 00:28:15.834 "block_size": 512, 00:28:15.834 "num_blocks": 204800, 00:28:15.834 "uuid": "1d6e6beb-1869-4cf7-bcb4-60de9ffeee36", 00:28:15.834 "assigned_rate_limits": { 00:28:15.834 "rw_ios_per_sec": 0, 00:28:15.834 "rw_mbytes_per_sec": 0, 00:28:15.834 "r_mbytes_per_sec": 0, 00:28:15.834 "w_mbytes_per_sec": 0 00:28:15.834 }, 00:28:15.834 "claimed": false, 00:28:15.834 "zoned": false, 00:28:15.834 "supported_io_types": { 00:28:15.834 "read": true, 00:28:15.834 "write": true, 00:28:15.834 "unmap": true, 00:28:15.834 "write_zeroes": true, 00:28:15.834 "flush": false, 00:28:15.834 "reset": true, 00:28:15.834 "compare": 
false, 00:28:15.834 "compare_and_write": false, 00:28:15.834 "abort": false, 00:28:15.834 "nvme_admin": false, 00:28:15.834 "nvme_io": false 00:28:15.834 }, 00:28:15.834 "driver_specific": { 00:28:15.834 "lvol": { 00:28:15.834 "lvol_store_uuid": "5da65d39-9fb7-4e63-92e2-48b9d7b6730e", 00:28:15.834 "base_bdev": "Nvme0n1", 00:28:15.834 "thin_provision": true, 00:28:15.834 "num_allocated_clusters": 0, 00:28:15.834 "snapshot": false, 00:28:15.834 "clone": false, 00:28:15.834 "esnap_clone": false 00:28:15.834 } 00:28:15.834 } 00:28:15.834 } 00:28:15.834 ] 00:28:15.834 03:22:46 compress_isal -- common/autotest_common.sh@903 -- # return 0 00:28:15.834 03:22:46 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:28:15.834 03:22:46 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:28:16.093 [2024-05-15 03:22:47.134316] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:16.093 COMP_lvs0/lv0 00:28:16.093 03:22:47 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:16.093 03:22:47 compress_isal -- common/autotest_common.sh@895 -- # local bdev_name=COMP_lvs0/lv0 00:28:16.093 03:22:47 compress_isal -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:28:16.093 03:22:47 compress_isal -- common/autotest_common.sh@897 -- # local i 00:28:16.093 03:22:47 compress_isal -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:28:16.093 03:22:47 compress_isal -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:28:16.093 03:22:47 compress_isal -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:16.351 03:22:47 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:16.609 [ 00:28:16.609 { 00:28:16.609 "name": "COMP_lvs0/lv0", 00:28:16.609 "aliases": [ 00:28:16.609 "1b47d5e1-1cb0-54f3-95de-9b6ec9e3521e" 00:28:16.609 ], 00:28:16.609 "product_name": "compress", 00:28:16.609 "block_size": 512, 00:28:16.609 "num_blocks": 200704, 00:28:16.609 "uuid": "1b47d5e1-1cb0-54f3-95de-9b6ec9e3521e", 00:28:16.609 "assigned_rate_limits": { 00:28:16.609 "rw_ios_per_sec": 0, 00:28:16.609 "rw_mbytes_per_sec": 0, 00:28:16.609 "r_mbytes_per_sec": 0, 00:28:16.609 "w_mbytes_per_sec": 0 00:28:16.609 }, 00:28:16.609 "claimed": false, 00:28:16.609 "zoned": false, 00:28:16.609 "supported_io_types": { 00:28:16.609 "read": true, 00:28:16.609 "write": true, 00:28:16.609 "unmap": false, 00:28:16.609 "write_zeroes": true, 00:28:16.609 "flush": false, 00:28:16.609 "reset": false, 00:28:16.609 "compare": false, 00:28:16.609 "compare_and_write": false, 00:28:16.609 "abort": false, 00:28:16.609 "nvme_admin": false, 00:28:16.609 "nvme_io": false 00:28:16.609 }, 00:28:16.609 "driver_specific": { 00:28:16.609 "compress": { 00:28:16.609 "name": "COMP_lvs0/lv0", 00:28:16.609 "base_bdev_name": "1d6e6beb-1869-4cf7-bcb4-60de9ffeee36" 00:28:16.609 } 00:28:16.609 } 00:28:16.609 } 00:28:16.609 ] 00:28:16.609 03:22:47 compress_isal -- common/autotest_common.sh@903 -- # return 0 00:28:16.609 03:22:47 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:16.867 I/O targets: 00:28:16.867 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:28:16.867 00:28:16.867 00:28:16.867 CUnit - 
A unit testing framework for C - Version 2.1-3 00:28:16.867 http://cunit.sourceforge.net/ 00:28:16.867 00:28:16.867 00:28:16.867 Suite: bdevio tests on: COMP_lvs0/lv0 00:28:16.867 Test: blockdev write read block ...passed 00:28:16.867 Test: blockdev write zeroes read block ...passed 00:28:16.867 Test: blockdev write zeroes read no split ...passed 00:28:16.867 Test: blockdev write zeroes read split ...passed 00:28:16.867 Test: blockdev write zeroes read split partial ...passed 00:28:16.867 Test: blockdev reset ...[2024-05-15 03:22:47.863341] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:28:16.867 passed 00:28:16.867 Test: blockdev write read 8 blocks ...passed 00:28:16.867 Test: blockdev write read size > 128k ...passed 00:28:16.867 Test: blockdev write read invalid size ...passed 00:28:16.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:16.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:16.867 Test: blockdev write read max offset ...passed 00:28:16.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:16.867 Test: blockdev writev readv 8 blocks ...passed 00:28:16.867 Test: blockdev writev readv 30 x 1block ...passed 00:28:16.867 Test: blockdev writev readv block ...passed 00:28:16.867 Test: blockdev writev readv size > 128k ...passed 00:28:16.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:16.867 Test: blockdev comparev and writev ...passed 00:28:16.867 Test: blockdev nvme passthru rw ...passed 00:28:16.867 Test: blockdev nvme passthru vendor specific ...passed 00:28:16.867 Test: blockdev nvme admin passthru ...passed 00:28:16.867 Test: blockdev copy ...passed 00:28:16.867 00:28:16.867 Run Summary: Type Total Ran Passed Failed Inactive 00:28:16.867 suites 1 1 n/a 0 0 00:28:16.867 tests 23 23 23 0 0 00:28:16.867 asserts 130 130 130 0 n/a 00:28:16.867 00:28:16.867 Elapsed time = 0.170 seconds 00:28:16.867 0 00:28:16.867 03:22:47 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:28:16.867 03:22:47 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:28:17.126 03:22:48 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:28:17.384 03:22:48 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:28:17.384 03:22:48 compress_isal -- compress/compress.sh@62 -- # killprocess 46939 00:28:17.384 03:22:48 compress_isal -- common/autotest_common.sh@946 -- # '[' -z 46939 ']' 00:28:17.384 03:22:48 compress_isal -- common/autotest_common.sh@950 -- # kill -0 46939 00:28:17.384 03:22:48 compress_isal -- common/autotest_common.sh@951 -- # uname 00:28:17.384 03:22:48 compress_isal -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:17.384 03:22:48 compress_isal -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 46939 00:28:17.384 03:22:48 compress_isal -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:17.384 03:22:48 compress_isal -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:17.384 03:22:48 compress_isal -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46939' 00:28:17.384 killing process with pid 46939 00:28:17.384 03:22:48 compress_isal -- common/autotest_common.sh@965 -- # kill 46939 00:28:17.384 03:22:48 compress_isal -- common/autotest_common.sh@970 
-- # wait 46939 00:28:19.286 03:22:49 compress_isal -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:28:19.286 03:22:49 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:28:19.286 00:28:19.286 real 0m47.306s 00:28:19.286 user 1m50.595s 00:28:19.286 sys 0m3.428s 00:28:19.287 03:22:49 compress_isal -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:19.287 03:22:49 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:28:19.287 ************************************ 00:28:19.287 END TEST compress_isal 00:28:19.287 ************************************ 00:28:19.287 03:22:50 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:28:19.287 03:22:50 -- spdk/autotest.sh@352 -- # '[' 1 -eq 1 ']' 00:28:19.287 03:22:50 -- spdk/autotest.sh@353 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:28:19.287 03:22:50 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:19.287 03:22:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:19.287 03:22:50 -- common/autotest_common.sh@10 -- # set +x 00:28:19.287 ************************************ 00:28:19.287 START TEST blockdev_crypto_aesni 00:28:19.287 ************************************ 00:28:19.287 03:22:50 blockdev_crypto_aesni -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:28:19.287 * Looking for test storage... 00:28:19.287 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # uname -s 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # test_type=crypto_aesni 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # crypto_device= 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # dek= 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # env_ctx= 00:28:19.287 03:22:50 blockdev_crypto_aesni -- 
bdev/blockdev.sh@686 -- # wait_for_rpc= 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == bdev ]] 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == crypto_* ]] 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=48591 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 48591 00:28:19.287 03:22:50 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:28:19.287 03:22:50 blockdev_crypto_aesni -- common/autotest_common.sh@827 -- # '[' -z 48591 ']' 00:28:19.287 03:22:50 blockdev_crypto_aesni -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.287 03:22:50 blockdev_crypto_aesni -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:19.287 03:22:50 blockdev_crypto_aesni -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.287 03:22:50 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:19.287 03:22:50 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:28:19.287 [2024-05-15 03:22:50.211794] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:28:19.287 [2024-05-15 03:22:50.211862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48591 ] 00:28:19.287 [2024-05-15 03:22:50.308141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.287 [2024-05-15 03:22:50.404470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.222 03:22:51 blockdev_crypto_aesni -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:20.222 03:22:51 blockdev_crypto_aesni -- common/autotest_common.sh@860 -- # return 0 00:28:20.222 03:22:51 blockdev_crypto_aesni -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:28:20.222 03:22:51 blockdev_crypto_aesni -- bdev/blockdev.sh@705 -- # setup_crypto_aesni_conf 00:28:20.222 03:22:51 blockdev_crypto_aesni -- bdev/blockdev.sh@146 -- # rpc_cmd 00:28:20.222 03:22:51 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.222 03:22:51 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:28:20.222 [2024-05-15 03:22:51.162775] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:28:20.222 [2024-05-15 03:22:51.170810] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:20.222 [2024-05-15 03:22:51.178828] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:20.222 [2024-05-15 03:22:51.250705] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:28:22.754 true 00:28:22.754 true 00:28:22.754 true 00:28:22.754 true 00:28:22.754 Malloc0 00:28:22.754 Malloc1 00:28:22.754 Malloc2 00:28:22.754 Malloc3 00:28:22.754 [2024-05-15 03:22:53.706877] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:28:22.754 crypto_ram 00:28:22.754 [2024-05-15 03:22:53.714887] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:28:22.754 crypto_ram2 00:28:22.754 [2024-05-15 03:22:53.722911] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:28:22.754 crypto_ram3 00:28:22.754 [2024-05-15 03:22:53.730929] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:28:22.754 crypto_ram4 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # cat 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 
00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # jq -r .name 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "10eef104-42bc-5ba2-8008-fab4a2ccc368"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "10eef104-42bc-5ba2-8008-fab4a2ccc368",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "395d211b-657e-51af-8ce8-a99d25aa8fdd"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "395d211b-657e-51af-8ce8-a99d25aa8fdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": 
"Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "19d91a61-b052-567c-8f9f-f0a7dbf03cb5"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "19d91a61-b052-567c-8f9f-f0a7dbf03cb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "e8b0640f-4884-5d51-834c-fbec3773f986"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e8b0640f-4884-5d51-834c-fbec3773f986",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:28:22.754 03:22:53 blockdev_crypto_aesni -- bdev/blockdev.sh@754 -- # killprocess 48591 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@946 -- # '[' -z 48591 ']' 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@950 -- # kill -0 48591 00:28:22.754 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@951 -- # uname 00:28:23.013 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:23.013 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48591 00:28:23.013 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:23.013 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:23.013 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48591' 00:28:23.013 killing process with pid 48591 00:28:23.013 03:22:53 blockdev_crypto_aesni -- 
common/autotest_common.sh@965 -- # kill 48591 00:28:23.013 03:22:53 blockdev_crypto_aesni -- common/autotest_common.sh@970 -- # wait 48591 00:28:23.581 03:22:54 blockdev_crypto_aesni -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:23.581 03:22:54 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:28:23.581 03:22:54 blockdev_crypto_aesni -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:28:23.581 03:22:54 blockdev_crypto_aesni -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:23.581 03:22:54 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:28:23.581 ************************************ 00:28:23.581 START TEST bdev_hello_world 00:28:23.581 ************************************ 00:28:23.581 03:22:54 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:28:23.581 [2024-05-15 03:22:54.566711] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:28:23.581 [2024-05-15 03:22:54.566762] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49299 ] 00:28:23.581 [2024-05-15 03:22:54.665313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.840 [2024-05-15 03:22:54.754923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.840 [2024-05-15 03:22:54.776213] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:28:23.840 [2024-05-15 03:22:54.784235] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:23.840 [2024-05-15 03:22:54.792253] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:23.840 [2024-05-15 03:22:54.894344] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:28:26.428 [2024-05-15 03:22:57.197751] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:28:26.428 [2024-05-15 03:22:57.197816] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:26.428 [2024-05-15 03:22:57.197829] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:26.428 [2024-05-15 03:22:57.205768] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:28:26.428 [2024-05-15 03:22:57.205786] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:26.428 [2024-05-15 03:22:57.205794] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:26.428 [2024-05-15 03:22:57.213789] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:28:26.428 [2024-05-15 03:22:57.213805] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:26.428 [2024-05-15 03:22:57.213814] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev 
creation deferred pending base bdev arrival 00:28:26.428 [2024-05-15 03:22:57.221809] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:28:26.428 [2024-05-15 03:22:57.221825] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:26.428 [2024-05-15 03:22:57.221833] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:26.428 [2024-05-15 03:22:57.293996] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:26.428 [2024-05-15 03:22:57.294037] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:28:26.428 [2024-05-15 03:22:57.294054] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:26.428 [2024-05-15 03:22:57.295378] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:26.428 [2024-05-15 03:22:57.295456] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:26.428 [2024-05-15 03:22:57.295470] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:26.428 [2024-05-15 03:22:57.295514] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:28:26.428 00:28:26.428 [2024-05-15 03:22:57.295530] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:26.686 00:28:26.686 real 0m3.150s 00:28:26.686 user 0m2.729s 00:28:26.686 sys 0m0.383s 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:28:26.686 ************************************ 00:28:26.686 END TEST bdev_hello_world 00:28:26.686 ************************************ 00:28:26.686 03:22:57 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:28:26.686 03:22:57 blockdev_crypto_aesni -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:26.686 03:22:57 blockdev_crypto_aesni -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:26.686 03:22:57 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:28:26.686 ************************************ 00:28:26.686 START TEST bdev_bounds 00:28:26.686 ************************************ 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=49778 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 49778' 00:28:26.686 Process bdevio pid: 49778 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 49778 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 49778 ']' 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:26.686 
03:22:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:26.686 03:22:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:26.686 [2024-05-15 03:22:57.800574] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:28:26.686 [2024-05-15 03:22:57.800625] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49778 ] 00:28:26.944 [2024-05-15 03:22:57.898918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:26.944 [2024-05-15 03:22:57.996226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.944 [2024-05-15 03:22:57.996322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.944 [2024-05-15 03:22:57.996326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.944 [2024-05-15 03:22:58.017672] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:28:26.944 [2024-05-15 03:22:58.025698] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:26.944 [2024-05-15 03:22:58.033720] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:27.201 [2024-05-15 03:22:58.138923] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:28:29.732 [2024-05-15 03:23:00.434797] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:28:29.732 [2024-05-15 03:23:00.434878] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:29.732 [2024-05-15 03:23:00.434891] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:29.732 [2024-05-15 03:23:00.442814] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:28:29.732 [2024-05-15 03:23:00.442832] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:29.732 [2024-05-15 03:23:00.442841] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:29.732 [2024-05-15 03:23:00.450839] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:28:29.732 [2024-05-15 03:23:00.450859] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:29.732 [2024-05-15 03:23:00.450868] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:29.732 [2024-05-15 03:23:00.458868] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:28:29.732 [2024-05-15 03:23:00.458883] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:29.732 [2024-05-15 03:23:00.458891] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:29.732 03:23:00 blockdev_crypto_aesni.bdev_bounds -- 
common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:29.733 03:23:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:28:29.733 03:23:00 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:29.733 I/O targets: 00:28:29.733 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:28:29.733 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:28:29.733 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:28:29.733 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:28:29.733 00:28:29.733 00:28:29.733 CUnit - A unit testing framework for C - Version 2.1-3 00:28:29.733 http://cunit.sourceforge.net/ 00:28:29.733 00:28:29.733 00:28:29.733 Suite: bdevio tests on: crypto_ram4 00:28:29.733 Test: blockdev write read block ...passed 00:28:29.733 Test: blockdev write zeroes read block ...passed 00:28:29.733 Test: blockdev write zeroes read no split ...passed 00:28:29.733 Test: blockdev write zeroes read split ...passed 00:28:29.733 Test: blockdev write zeroes read split partial ...passed 00:28:29.733 Test: blockdev reset ...passed 00:28:29.733 Test: blockdev write read 8 blocks ...passed 00:28:29.733 Test: blockdev write read size > 128k ...passed 00:28:29.733 Test: blockdev write read invalid size ...passed 00:28:29.733 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:29.733 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:29.733 Test: blockdev write read max offset ...passed 00:28:29.733 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:29.733 Test: blockdev writev readv 8 blocks ...passed 00:28:29.733 Test: blockdev writev readv 30 x 1block ...passed 00:28:29.733 Test: blockdev writev readv block ...passed 00:28:29.733 Test: blockdev writev readv size > 128k ...passed 00:28:29.733 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:29.733 Test: blockdev comparev and writev ...passed 00:28:29.733 Test: blockdev nvme passthru rw ...passed 00:28:29.733 Test: blockdev nvme passthru vendor specific ...passed 00:28:29.733 Test: blockdev nvme admin passthru ...passed 00:28:29.733 Test: blockdev copy ...passed 00:28:29.733 Suite: bdevio tests on: crypto_ram3 00:28:29.733 Test: blockdev write read block ...passed 00:28:29.733 Test: blockdev write zeroes read block ...passed 00:28:29.733 Test: blockdev write zeroes read no split ...passed 00:28:29.733 Test: blockdev write zeroes read split ...passed 00:28:29.733 Test: blockdev write zeroes read split partial ...passed 00:28:29.733 Test: blockdev reset ...passed 00:28:29.733 Test: blockdev write read 8 blocks ...passed 00:28:29.733 Test: blockdev write read size > 128k ...passed 00:28:29.733 Test: blockdev write read invalid size ...passed 00:28:29.733 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:29.733 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:29.733 Test: blockdev write read max offset ...passed 00:28:29.733 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:29.733 Test: blockdev writev readv 8 blocks ...passed 00:28:29.733 Test: blockdev writev readv 30 x 1block ...passed 00:28:29.733 Test: blockdev writev readv block ...passed 00:28:29.733 Test: blockdev writev readv size > 128k ...passed 00:28:29.733 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:29.733 Test: blockdev comparev and writev ...passed 00:28:29.733 
Test: blockdev nvme passthru rw ...passed 00:28:29.733 Test: blockdev nvme passthru vendor specific ...passed 00:28:29.733 Test: blockdev nvme admin passthru ...passed 00:28:29.733 Test: blockdev copy ...passed 00:28:29.733 Suite: bdevio tests on: crypto_ram2 00:28:29.733 Test: blockdev write read block ...passed 00:28:29.733 Test: blockdev write zeroes read block ...passed 00:28:29.733 Test: blockdev write zeroes read no split ...passed 00:28:29.733 Test: blockdev write zeroes read split ...passed 00:28:29.733 Test: blockdev write zeroes read split partial ...passed 00:28:29.733 Test: blockdev reset ...passed 00:28:29.733 Test: blockdev write read 8 blocks ...passed 00:28:29.733 Test: blockdev write read size > 128k ...passed 00:28:29.733 Test: blockdev write read invalid size ...passed 00:28:29.733 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:29.733 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:29.733 Test: blockdev write read max offset ...passed 00:28:29.733 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:29.733 Test: blockdev writev readv 8 blocks ...passed 00:28:29.733 Test: blockdev writev readv 30 x 1block ...passed 00:28:29.733 Test: blockdev writev readv block ...passed 00:28:29.733 Test: blockdev writev readv size > 128k ...passed 00:28:29.733 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:29.733 Test: blockdev comparev and writev ...passed 00:28:29.733 Test: blockdev nvme passthru rw ...passed 00:28:29.733 Test: blockdev nvme passthru vendor specific ...passed 00:28:29.733 Test: blockdev nvme admin passthru ...passed 00:28:29.733 Test: blockdev copy ...passed 00:28:29.733 Suite: bdevio tests on: crypto_ram 00:28:29.733 Test: blockdev write read block ...passed 00:28:29.733 Test: blockdev write zeroes read block ...passed 00:28:29.733 Test: blockdev write zeroes read no split ...passed 00:28:29.733 Test: blockdev write zeroes read split ...passed 00:28:29.992 Test: blockdev write zeroes read split partial ...passed 00:28:29.992 Test: blockdev reset ...passed 00:28:29.992 Test: blockdev write read 8 blocks ...passed 00:28:29.992 Test: blockdev write read size > 128k ...passed 00:28:29.992 Test: blockdev write read invalid size ...passed 00:28:29.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:29.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:29.992 Test: blockdev write read max offset ...passed 00:28:29.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:29.992 Test: blockdev writev readv 8 blocks ...passed 00:28:29.992 Test: blockdev writev readv 30 x 1block ...passed 00:28:29.992 Test: blockdev writev readv block ...passed 00:28:29.992 Test: blockdev writev readv size > 128k ...passed 00:28:29.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:29.992 Test: blockdev comparev and writev ...passed 00:28:29.992 Test: blockdev nvme passthru rw ...passed 00:28:29.992 Test: blockdev nvme passthru vendor specific ...passed 00:28:29.992 Test: blockdev nvme admin passthru ...passed 00:28:29.992 Test: blockdev copy ...passed 00:28:29.992 00:28:29.992 Run Summary: Type Total Ran Passed Failed Inactive 00:28:29.992 suites 4 4 n/a 0 0 00:28:29.992 tests 92 92 92 0 0 00:28:29.992 asserts 520 520 520 0 n/a 00:28:29.992 00:28:29.992 Elapsed time = 0.536 seconds 00:28:29.992 0 00:28:29.992 03:23:00 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # 
killprocess 49778 00:28:29.992 03:23:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 49778 ']' 00:28:29.992 03:23:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 49778 00:28:29.992 03:23:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:28:29.992 03:23:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:29.992 03:23:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 49778 00:28:29.992 03:23:01 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:29.992 03:23:01 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:29.992 03:23:01 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49778' 00:28:29.992 killing process with pid 49778 00:28:29.992 03:23:01 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@965 -- # kill 49778 00:28:29.992 03:23:01 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@970 -- # wait 49778 00:28:30.250 03:23:01 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:28:30.250 00:28:30.250 real 0m3.644s 00:28:30.250 user 0m10.242s 00:28:30.250 sys 0m0.576s 00:28:30.250 03:23:01 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:30.250 03:23:01 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:30.250 ************************************ 00:28:30.250 END TEST bdev_bounds 00:28:30.250 ************************************ 00:28:30.509 03:23:01 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:28:30.509 03:23:01 blockdev_crypto_aesni -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:28:30.509 03:23:01 blockdev_crypto_aesni -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:30.509 03:23:01 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:28:30.509 ************************************ 00:28:30.509 START TEST bdev_nbd 00:28:30.509 ************************************ 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 
00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=50453 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 50453 /var/tmp/spdk-nbd.sock 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 50453 ']' 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:30.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:30.509 03:23:01 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:30.509 [2024-05-15 03:23:01.529407] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:28:30.509 [2024-05-15 03:23:01.529459] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.509 [2024-05-15 03:23:01.626729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.768 [2024-05-15 03:23:01.720845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.768 [2024-05-15 03:23:01.742196] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:28:30.768 [2024-05-15 03:23:01.750230] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:30.768 [2024-05-15 03:23:01.758236] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:30.768 [2024-05-15 03:23:01.862882] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:28:33.297 [2024-05-15 03:23:04.173496] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:28:33.297 [2024-05-15 03:23:04.173552] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:33.297 [2024-05-15 03:23:04.173565] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:33.297 [2024-05-15 03:23:04.181515] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:28:33.297 [2024-05-15 03:23:04.181532] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:33.297 [2024-05-15 03:23:04.181541] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:33.297 [2024-05-15 03:23:04.189535] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:28:33.297 [2024-05-15 03:23:04.189551] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:33.297 [2024-05-15 03:23:04.189560] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:33.297 [2024-05-15 03:23:04.197556] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:28:33.297 [2024-05-15 03:23:04.197572] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:33.297 [2024-05-15 03:23:04.197580] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 
'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:28:33.297 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:33.556 1+0 records in 00:28:33.556 1+0 records out 00:28:33.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255383 s, 16.0 MB/s 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:28:33.556 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:28:33.814 03:23:04 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:33.814 1+0 records in 00:28:33.814 1+0 records out 00:28:33.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278938 s, 14.7 MB/s 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:28:33.814 03:23:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 
00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:34.073 1+0 records in 00:28:34.073 1+0 records out 00:28:34.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261658 s, 15.7 MB/s 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:28:34.073 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:34.332 1+0 records in 00:28:34.332 1+0 records out 00:28:34.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304076 s, 13.5 MB/s 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:34.332 03:23:05 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:28:34.332 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:34.591 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:34.591 { 00:28:34.591 "nbd_device": "/dev/nbd0", 00:28:34.591 "bdev_name": "crypto_ram" 00:28:34.591 }, 00:28:34.591 { 00:28:34.591 "nbd_device": "/dev/nbd1", 00:28:34.591 "bdev_name": "crypto_ram2" 00:28:34.591 }, 00:28:34.591 { 00:28:34.591 "nbd_device": "/dev/nbd2", 00:28:34.591 "bdev_name": "crypto_ram3" 00:28:34.591 }, 00:28:34.591 { 00:28:34.591 "nbd_device": "/dev/nbd3", 00:28:34.591 "bdev_name": "crypto_ram4" 00:28:34.591 } 00:28:34.591 ]' 00:28:34.591 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:34.591 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:34.591 { 00:28:34.591 "nbd_device": "/dev/nbd0", 00:28:34.591 "bdev_name": "crypto_ram" 00:28:34.591 }, 00:28:34.591 { 00:28:34.591 "nbd_device": "/dev/nbd1", 00:28:34.591 "bdev_name": "crypto_ram2" 00:28:34.591 }, 00:28:34.591 { 00:28:34.591 "nbd_device": "/dev/nbd2", 00:28:34.591 "bdev_name": "crypto_ram3" 00:28:34.591 }, 00:28:34.591 { 00:28:34.591 "nbd_device": "/dev/nbd3", 00:28:34.591 "bdev_name": "crypto_ram4" 00:28:34.591 } 00:28:34.591 ]' 00:28:34.591 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:34.849 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:28:34.849 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:34.849 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:28:34.849 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:34.849 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:34.849 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:34.849 03:23:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:35.108 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:35.108 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:35.108 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:35.108 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:35.108 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:35.108 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:35.108 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:35.108 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 
00:28:35.108 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:35.108 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:35.367 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:35.367 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:35.367 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:35.367 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:35.367 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:35.367 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:35.367 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:35.367 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:35.367 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:35.367 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:28:35.625 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:28:35.625 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:28:35.625 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:28:35.625 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:35.625 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:35.625 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:28:35.625 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:35.625 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:35.625 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:35.625 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:28:35.884 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:28:35.884 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:28:35.884 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:28:35.884 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:35.884 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:35.884 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:28:35.884 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:35.884 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:35.884 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:35.884 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:28:35.884 03:23:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:28:36.143 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:28:36.402 /dev/nbd0 00:28:36.402 03:23:07 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:36.402 1+0 records in 00:28:36.402 1+0 records out 00:28:36.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277326 s, 14.8 MB/s 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:28:36.402 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:28:36.660 /dev/nbd1 00:28:36.660 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:36.660 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:36.660 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:28:36.660 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:28:36.660 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:36.660 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:36.660 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:28:36.661 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:28:36.661 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:36.661 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:36.661 03:23:07 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:36.661 1+0 records in 00:28:36.661 1+0 records out 00:28:36.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002874 s, 14.3 MB/s 00:28:36.661 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:36.661 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:28:36.661 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:36.661 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:36.661 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:28:36.661 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:36.661 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:28:36.661 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:28:36.919 /dev/nbd10 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:36.919 1+0 records in 00:28:36.919 1+0 records out 00:28:36.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282878 s, 14.5 MB/s 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # 
(( i < 4 )) 00:28:36.919 03:23:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:28:37.178 /dev/nbd11 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:37.178 1+0 records in 00:28:37.178 1+0 records out 00:28:37.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302436 s, 13.5 MB/s 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:37.178 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:37.436 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:37.436 { 00:28:37.436 "nbd_device": "/dev/nbd0", 00:28:37.436 "bdev_name": "crypto_ram" 00:28:37.436 }, 00:28:37.436 { 00:28:37.436 "nbd_device": "/dev/nbd1", 00:28:37.436 "bdev_name": "crypto_ram2" 00:28:37.436 }, 00:28:37.436 { 00:28:37.436 "nbd_device": "/dev/nbd10", 00:28:37.436 "bdev_name": "crypto_ram3" 00:28:37.436 }, 00:28:37.436 { 00:28:37.436 "nbd_device": "/dev/nbd11", 00:28:37.436 "bdev_name": "crypto_ram4" 00:28:37.436 } 00:28:37.436 ]' 00:28:37.436 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 
00:28:37.436 { 00:28:37.436 "nbd_device": "/dev/nbd0", 00:28:37.436 "bdev_name": "crypto_ram" 00:28:37.436 }, 00:28:37.436 { 00:28:37.436 "nbd_device": "/dev/nbd1", 00:28:37.436 "bdev_name": "crypto_ram2" 00:28:37.436 }, 00:28:37.436 { 00:28:37.436 "nbd_device": "/dev/nbd10", 00:28:37.436 "bdev_name": "crypto_ram3" 00:28:37.436 }, 00:28:37.436 { 00:28:37.436 "nbd_device": "/dev/nbd11", 00:28:37.436 "bdev_name": "crypto_ram4" 00:28:37.436 } 00:28:37.436 ]' 00:28:37.436 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:37.437 /dev/nbd1 00:28:37.437 /dev/nbd10 00:28:37.437 /dev/nbd11' 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:37.437 /dev/nbd1 00:28:37.437 /dev/nbd10 00:28:37.437 /dev/nbd11' 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:37.437 256+0 records in 00:28:37.437 256+0 records out 00:28:37.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00671486 s, 156 MB/s 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:37.437 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:37.696 256+0 records in 00:28:37.696 256+0 records out 00:28:37.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0440414 s, 23.8 MB/s 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:37.696 256+0 records in 00:28:37.696 256+0 records out 00:28:37.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0480912 s, 21.8 MB/s 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:28:37.696 256+0 records in 00:28:37.696 256+0 records out 00:28:37.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0419697 s, 25.0 MB/s 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:28:37.696 256+0 records in 00:28:37.696 256+0 records out 00:28:37.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0394372 s, 26.6 MB/s 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:37.696 
03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:37.696 03:23:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:37.955 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:37.955 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:37.955 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:37.955 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:37.955 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:37.955 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:37.955 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:37.955 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:37.955 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:37.955 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:38.213 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:38.213 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:38.213 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:38.213 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:38.213 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:38.213 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:38.213 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:38.213 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:38.213 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:38.213 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:38.779 03:23:09 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:39.036 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:39.036 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:39.036 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:39.036 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:39.036 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:39.036 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:39.036 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:39.036 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:39.037 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:39.037 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:28:39.037 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:39.037 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:28:39.037 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:28:39.037 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:39.037 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:39.037 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:28:39.037 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:28:39.037 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:39.294 malloc_lvol_verify 00:28:39.294 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:39.553 1d080740-96f6-47db-95b6-c0ee1ee583b8 00:28:39.553 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:39.811 49b40bb9-1428-49cf-9fdb-2849fd0a48c6 00:28:39.811 03:23:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:40.069 /dev/nbd0 00:28:40.069 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:28:40.069 mke2fs 1.46.5 (30-Dec-2021) 00:28:40.069 Discarding device blocks: 0/4096 done 00:28:40.069 Creating filesystem with 4096 1k blocks and 1024 inodes 00:28:40.069 00:28:40.069 Allocating group tables: 0/1 done 00:28:40.069 Writing inode tables: 0/1 done 00:28:40.069 Creating journal (1024 blocks): done 00:28:40.069 Writing superblocks and filesystem accounting information: 0/1 done 00:28:40.069 00:28:40.069 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:28:40.069 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:40.069 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:40.069 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:40.069 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:40.069 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:40.069 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:40.069 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 50453 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 50453 ']' 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 50453 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@951 
-- # '[' Linux = Linux ']' 00:28:40.327 03:23:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 50453 00:28:40.586 03:23:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:40.586 03:23:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:40.586 03:23:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50453' 00:28:40.586 killing process with pid 50453 00:28:40.586 03:23:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@965 -- # kill 50453 00:28:40.586 03:23:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@970 -- # wait 50453 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:28:40.845 00:28:40.845 real 0m10.435s 00:28:40.845 user 0m14.559s 00:28:40.845 sys 0m3.259s 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:40.845 ************************************ 00:28:40.845 END TEST bdev_nbd 00:28:40.845 ************************************ 00:28:40.845 03:23:11 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:28:40.845 03:23:11 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = nvme ']' 00:28:40.845 03:23:11 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = gpt ']' 00:28:40.845 03:23:11 blockdev_crypto_aesni -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:28:40.845 03:23:11 blockdev_crypto_aesni -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:40.845 03:23:11 blockdev_crypto_aesni -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:40.845 03:23:11 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:28:40.845 ************************************ 00:28:40.845 START TEST bdev_fio 00:28:40.845 ************************************ 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:28:40.845 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- 
common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:28:40.845 03:23:11 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram4]' 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram4 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:41.104 ************************************ 00:28:41.104 START TEST bdev_fio_rw_verify 00:28:41.104 ************************************ 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:41.104 03:23:12 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:41.363 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:41.363 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:41.363 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:41.363 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:41.363 fio-3.35 00:28:41.363 Starting 4 threads 00:28:56.279 00:28:56.279 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=52837: Wed May 15 03:23:25 2024 00:28:56.279 read: IOPS=19.1k, BW=74.6MiB/s (78.2MB/s)(746MiB/10001msec) 00:28:56.279 slat (usec): min=18, max=1195, avg=69.04, stdev=32.48 00:28:56.279 clat (usec): min=13, max=1959, avg=375.32, stdev=209.79 00:28:56.279 lat (usec): min=46, max=2168, avg=444.36, stdev=225.33 00:28:56.279 clat percentiles (usec): 00:28:56.279 | 50.000th=[ 330], 99.000th=[ 922], 99.900th=[ 1074], 99.990th=[ 1401], 00:28:56.279 | 99.999th=[ 1827] 00:28:56.279 write: IOPS=21.1k, BW=82.5MiB/s (86.6MB/s)(806MiB/9759msec); 0 zone resets 00:28:56.279 slat (usec): min=24, max=415, avg=83.28, stdev=30.10 00:28:56.279 clat (usec): min=38, max=1729, avg=455.61, stdev=248.24 00:28:56.279 lat (usec): min=85, max=1949, avg=538.89, stdev=261.07 00:28:56.279 clat percentiles (usec): 00:28:56.279 | 50.000th=[ 416], 99.000th=[ 1188], 99.900th=[ 1352], 99.990th=[ 1467], 00:28:56.279 | 99.999th=[ 1532] 00:28:56.279 bw ( KiB/s): min=64936, max=116944, per=97.52%, avg=82438.74, stdev=3640.13, samples=76 00:28:56.279 iops : min=16234, max=29236, avg=20609.68, stdev=910.03, samples=76 00:28:56.279 lat (usec) : 20=0.01%, 50=0.01%, 100=3.44%, 250=24.71%, 500=39.78% 00:28:56.279 lat (usec) : 750=22.66%, 1000=7.40% 00:28:56.279 lat (msec) : 2=2.02% 00:28:56.279 cpu : usr=99.61%, sys=0.00%, ctx=72, majf=0, minf=228 00:28:56.279 IO depths : 1=9.9%, 2=25.6%, 4=51.3%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:56.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.279 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.279 issued rwts: total=191047,206233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.279 
latency : target=0, window=0, percentile=100.00%, depth=8 00:28:56.279 00:28:56.279 Run status group 0 (all jobs): 00:28:56.279 READ: bw=74.6MiB/s (78.2MB/s), 74.6MiB/s-74.6MiB/s (78.2MB/s-78.2MB/s), io=746MiB (783MB), run=10001-10001msec 00:28:56.279 WRITE: bw=82.5MiB/s (86.6MB/s), 82.5MiB/s-82.5MiB/s (86.6MB/s-86.6MB/s), io=806MiB (845MB), run=9759-9759msec 00:28:56.279 00:28:56.279 real 0m13.533s 00:28:56.279 user 0m50.231s 00:28:56.279 sys 0m0.505s 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:28:56.279 ************************************ 00:28:56.279 END TEST bdev_fio_rw_verify 00:28:56.279 ************************************ 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:28:56.279 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "10eef104-42bc-5ba2-8008-fab4a2ccc368"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "10eef104-42bc-5ba2-8008-fab4a2ccc368",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "395d211b-657e-51af-8ce8-a99d25aa8fdd"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "395d211b-657e-51af-8ce8-a99d25aa8fdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "19d91a61-b052-567c-8f9f-f0a7dbf03cb5"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "19d91a61-b052-567c-8f9f-f0a7dbf03cb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "e8b0640f-4884-5d51-834c-fbec3773f986"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e8b0640f-4884-5d51-834c-fbec3773f986",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:28:56.280 crypto_ram2 00:28:56.280 crypto_ram3 00:28:56.280 crypto_ram4 ]] 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "10eef104-42bc-5ba2-8008-fab4a2ccc368"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "10eef104-42bc-5ba2-8008-fab4a2ccc368",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "395d211b-657e-51af-8ce8-a99d25aa8fdd"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "395d211b-657e-51af-8ce8-a99d25aa8fdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "19d91a61-b052-567c-8f9f-f0a7dbf03cb5"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "19d91a61-b052-567c-8f9f-f0a7dbf03cb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "e8b0640f-4884-5d51-834c-fbec3773f986"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e8b0640f-4884-5d51-834c-fbec3773f986",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram4]' 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram4 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:56.280 ************************************ 00:28:56.280 START TEST bdev_fio_trim 00:28:56.280 ************************************ 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # shift 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libasan 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:56.280 03:23:25 
00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]]
00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev'
00:28:56.280 03:23:25 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:28:56.280 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:28:56.280 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:28:56.280 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:28:56.280 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:28:56.280 fio-3.35
00:28:56.280 Starting 4 threads
00:29:08.491
00:29:08.491 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=55227: Wed May 15 03:23:38 2024
00:29:08.491 write: IOPS=32.5k, BW=127MiB/s (133MB/s)(1268MiB/10001msec); 0 zone resets
00:29:08.491 slat (usec): min=18, max=1112, avg=68.25, stdev=26.73
00:29:08.491 clat (usec): min=28, max=1546, avg=311.97, stdev=152.04
00:29:08.491 lat (usec): min=57, max=1660, avg=380.22, stdev=163.99
00:29:08.491 clat percentiles (usec):
00:29:08.491 | 50.000th=[ 285], 99.000th=[ 750], 99.900th=[ 840], 99.990th=[ 1012],
00:29:08.491 | 99.999th=[ 1385]
00:29:08.491 bw ( KiB/s): min=125776, max=173667, per=100.00%, avg=129979.95, stdev=3091.90, samples=76
00:29:08.491 iops : min=31444, max=43416, avg=32494.95, stdev=772.92, samples=76
00:29:08.491 trim: IOPS=32.5k, BW=127MiB/s (133MB/s)(1268MiB/10001msec); 0 zone resets
00:29:08.491 slat (usec): min=6, max=284, avg=20.15, stdev= 8.37
00:29:08.492 clat (usec): min=57, max=1440, avg=292.59, stdev=111.55
00:29:08.492 lat (usec): min=68, max=1464, avg=312.74, stdev=113.56
00:29:08.492 clat percentiles (usec):
00:29:08.492 | 50.000th=[ 285], 99.000th=[ 545], 99.900th=[ 611], 99.990th=[ 717],
00:29:08.492 | 99.999th=[ 1188]
00:29:08.492 bw ( KiB/s): min=125768, max=173691, per=100.00%, avg=129981.21, stdev=3092.36, samples=76
00:29:08.492 iops : min=31442, max=43422, avg=32495.26, stdev=773.03, samples=76
00:29:08.492 lat (usec) : 50=0.01%, 100=2.86%, 250=37.04%, 500=52.13%, 750=7.49%
00:29:08.492 lat (usec) : 1000=0.47%
00:29:08.492 lat (msec) : 2=0.01%
00:29:08.492 cpu : usr=99.59%, sys=0.00%, ctx=74, majf=0, minf=90
00:29:08.492 IO depths : 1=6.9%, 2=26.6%, 4=53.2%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:08.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:08.492 complete : 0=0.0%, 4=88.3%, 8=11.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:08.492 issued rwts: total=0,324667,324669,0 short=0,0,0,0 dropped=0,0,0,0
00:29:08.492 latency : target=0, window=0, percentile=100.00%, depth=8
00:29:08.492
00:29:08.492 Run status group 0 (all jobs):
00:29:08.492 WRITE: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=1268MiB (1330MB), run=10001-10001msec
00:29:08.492 TRIM: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=1268MiB (1330MB), run=10001-10001msec
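The xtrace above reduces to a reusable pattern: find whichever ASan runtime the fio plugin links against, preload it ahead of the plugin, then point fio at the spdk_bdev ioengine. A minimal standalone sketch of that pattern, assuming the workspace layout shown in the log (paths are illustrative, not requirements of the harness):

    #!/usr/bin/env bash
    # Sketch of the fio_bdev wrapper logic traced above (not the harness itself).
    set -euo pipefail

    spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
    plugin=$spdk/build/fio/spdk_bdev

    # A sanitizer-built plugin must have its runtime preloaded before the
    # plugin itself, otherwise fio cannot dlopen() it.
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}' || true)
        [[ -n "$asan_lib" ]] && break
    done

    # An empty asan_lib (as in this run) leaves only the plugin in LD_PRELOAD.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf="$spdk/test/bdev/bdev.json" "$spdk/test/bdev/bdev.fio"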
00:29:08.492
00:29:08.492 real 0m13.529s
00:29:08.492 user 0m49.861s
00:29:08.492 sys 0m0.503s
00:29:08.492 03:23:39 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1122 -- # xtrace_disable
00:29:08.492 03:23:39 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:29:08.492 ************************************
00:29:08.492 END TEST bdev_fio_trim
00:29:08.492 ************************************
00:29:08.492 03:23:39 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f
00:29:08.492 03:23:39 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:29:08.492 03:23:39 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # popd
00:29:08.492 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:29:08.492 03:23:39 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT
00:29:08.492
00:29:08.492 real 0m27.389s
00:29:08.492 user 1m40.278s
00:29:08.492 sys 0m1.161s
00:29:08.492 03:23:39 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable
00:29:08.492 03:23:39 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:29:08.492 ************************************
00:29:08.492 END TEST bdev_fio
00:29:08.492 ************************************
00:29:08.492 03:23:39 blockdev_crypto_aesni -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT
00:29:08.492 03:23:39 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:29:08.492 03:23:39 blockdev_crypto_aesni -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']'
00:29:08.492 03:23:39 blockdev_crypto_aesni -- common/autotest_common.sh@1103 -- # xtrace_disable
00:29:08.492 03:23:39 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:29:08.492 ************************************
00:29:08.492 START TEST bdev_verify
00:29:08.492 ************************************
00:29:08.492 03:23:39 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:29:08.492 [2024-05-15 03:23:39.493991] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:29:08.492 [2024-05-15 03:23:39.494044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56850 ]
00:29:08.492 [2024-05-15 03:23:39.593198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:08.751 [2024-05-15 03:23:39.685953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:08.751 [2024-05-15 03:23:39.685959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:08.751 [2024-05-15 03:23:39.707318] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:29:08.751 [2024-05-15 03:23:39.715349] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:29:08.751 [2024-05-15 03:23:39.723371] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:29:08.751 [2024-05-15 03:23:39.828450] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:29:11.287 [2024-05-15 03:23:42.130187] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:29:11.287 [2024-05-15 03:23:42.130256] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:29:11.287 [2024-05-15 03:23:42.130269] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:29:11.287 [2024-05-15 03:23:42.138205] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:29:11.287 [2024-05-15 03:23:42.138225] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:29:11.287 [2024-05-15 03:23:42.138234] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:29:11.287 [2024-05-15 03:23:42.146230] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:29:11.287 [2024-05-15 03:23:42.146246] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:29:11.287 [2024-05-15 03:23:42.146254] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:29:11.287 [2024-05-15 03:23:42.154253] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:29:11.287 [2024-05-15 03:23:42.154269] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:29:11.287 [2024-05-15 03:23:42.154277] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:29:11.287 Running I/O for 5 seconds...
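The "Found key" and "creation deferred pending base bdev arrival" notices show the ordering in the test's JSON config: each crypto vbdev references an accel key and a Malloc base bdev that arrives later. A hedged sketch of the equivalent manual setup through scripts/rpc.py; the key-name form of bdev_crypto_create and the key bytes below are assumptions for illustration, not values taken from this run's config:

    # Register a named AES_CBC DEK with the accel framework; dpdk_cryptodev
    # services encrypt/decrypt, per the rpc_accel_assign_opc notices above.
    ./scripts/rpc.py accel_crypto_key_create -c AES_CBC \
        -k 00112233445566778899aabbccddeeff -n test_dek_aesni_cbc_1

    # Stack a crypto vbdev on Malloc0. Issued before Malloc0 exists, this
    # logs exactly the "unable to find bdev ... creation deferred" pair seen here.
    ./scripts/rpc.py bdev_crypto_create -n test_dek_aesni_cbc_1 Malloc0 crypto_ram

    # Creating the base bdev afterwards completes the deferred crypto_ram.
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512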
00:29:16.555
00:29:16.555 Latency(us)
00:29:16.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.555 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:16.555 Verification LBA range: start 0x0 length 0x1000
00:29:16.555 crypto_ram : 5.07 454.43 1.78 0.00 0.00 281116.37 5554.96 181753.17
00:29:16.555 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:16.555 Verification LBA range: start 0x1000 length 0x1000
00:29:16.555 crypto_ram : 5.07 454.51 1.78 0.00 0.00 281060.36 5929.45 181753.17
00:29:16.555 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:16.555 Verification LBA range: start 0x0 length 0x1000
00:29:16.555 crypto_ram2 : 5.07 454.21 1.77 0.00 0.00 280379.67 5617.37 168770.80
00:29:16.555 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:16.555 Verification LBA range: start 0x1000 length 0x1000
00:29:16.555 crypto_ram2 : 5.07 454.38 1.77 0.00 0.00 280312.83 5991.86 168770.80
00:29:16.556 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:16.556 Verification LBA range: start 0x0 length 0x1000
00:29:16.556 crypto_ram3 : 5.05 3495.41 13.65 0.00 0.00 36302.85 5991.86 28086.86
00:29:16.556 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:16.556 Verification LBA range: start 0x1000 length 0x1000
00:29:16.556 crypto_ram3 : 5.05 3500.56 13.67 0.00 0.00 36220.69 10298.51 28835.84
00:29:16.556 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:16.556 Verification LBA range: start 0x0 length 0x1000
00:29:16.556 crypto_ram4 : 5.05 3494.56 13.65 0.00 0.00 36210.94 5898.24 28711.01
00:29:16.556 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:16.556 Verification LBA range: start 0x1000 length 0x1000
00:29:16.556 crypto_ram4 : 5.06 3517.23 13.74 0.00 0.00 35982.98 3448.44 28835.84
00:29:16.556 ===================================================================================================================
00:29:16.556 Total : 15825.29 61.82 0.00 0.00 64349.82 3448.44 181753.17
00:29:16.814
00:29:16.814 real 0m8.288s
00:29:16.814 user 0m15.714s
00:29:16.814 sys 0m0.405s
00:29:16.814 03:23:47 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:29:16.814 03:23:47 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:29:16.814 ************************************
00:29:16.814 END TEST bdev_verify
00:29:16.814 ************************************
00:29:16.814 03:23:47 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:16.814 03:23:47 blockdev_crypto_aesni -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']'
00:29:16.814 03:23:47 blockdev_crypto_aesni -- common/autotest_common.sh@1103 -- # xtrace_disable
00:29:16.814 03:23:47 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:29:16.814 ************************************
00:29:16.814 START TEST bdev_verify_big_io
00:29:16.814 ************************************
00:29:16.814 03:23:47 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:16.814 [2024-05-15 03:23:47.857505] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:29:16.814 [2024-05-15 03:23:47.857556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58223 ]
00:29:16.814 [2024-05-15 03:23:47.953752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:17.073 [2024-05-15 03:23:48.045829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:17.073 [2024-05-15 03:23:48.045835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:17.073 [2024-05-15 03:23:48.067320] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:29:17.073 [2024-05-15 03:23:48.075351] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:29:17.073 [2024-05-15 03:23:48.083370] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:29:17.073 [2024-05-15 03:23:48.184998] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:29:19.604 [2024-05-15 03:23:50.480254] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:29:19.604 [2024-05-15 03:23:50.480330] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:29:19.604 [2024-05-15 03:23:50.480342] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:29:19.604 [2024-05-15 03:23:50.488274] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:29:19.604 [2024-05-15 03:23:50.488291] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:29:19.604 [2024-05-15 03:23:50.488300] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:29:19.604 [2024-05-15 03:23:50.496295] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:29:19.604 [2024-05-15 03:23:50.496310] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:29:19.604 [2024-05-15 03:23:50.496319] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:29:19.604 [2024-05-15 03:23:50.504317] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:29:19.604 [2024-05-15 03:23:50.504332] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:29:19.604 [2024-05-15 03:23:50.504341] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:29:19.604 Running I/O for 5 seconds...
00:29:22.893 [2024-05-15 03:23:53.681971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:29:22.893 [2024-05-15 03:23:53.682796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:29:22.893 [2024-05-15 03:23:53.684508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:29:22.893 [2024-05-15 03:23:53.685701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:29:22.896 [... the same *ERROR* line repeats roughly 300 more times between 03:23:53.685701 and 03:23:54.006705, identical except for timestamps; repetitions condensed here ...]
00:29:22.896 [2024-05-15 03:23:54.006756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.006812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.006863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.007326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.007502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.007548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.007604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.007665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.008075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.009502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.009554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.009595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.009635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.009939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.010108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.010155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.896 [2024-05-15 03:23:54.010198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.010243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.010568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.011716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.011771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.011813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.011860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.012193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:22.897 [2024-05-15 03:23:54.012353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.012399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.012440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.012482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.012855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.013953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.014004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.014047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.014101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.014477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.014642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.014702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.014744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.014799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.015203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.016470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.016539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.016594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.016638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.017016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.017195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.017254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.017296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.017343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:22.897 [2024-05-15 03:23:54.017785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.018802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.018859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.018901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.018941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.019220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.019388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.019434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.019485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.019528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.019955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.020893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.020946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.020990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.021032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.021460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.021613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.021659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.021713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.021754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.022165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.023162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.023214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.023263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:22.897 [2024-05-15 03:23:54.023307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.023710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.023873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.023919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.023961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.024001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.024285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.025338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.025745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.025824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.027436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.027716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.027905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.028395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.028474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.030353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.030635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.031942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.032462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.032535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.034156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.897 [2024-05-15 03:23:54.034437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.034618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.035469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:22.898 [2024-05-15 03:23:54.035542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.037161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.037470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.038604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.039873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.039946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.040606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.040894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.041072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.041480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.041551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.041949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.042270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.043274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.044552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.044624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.045704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.045993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.046198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.046600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.046667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.047071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.047354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:22.898 [2024-05-15 03:23:54.048414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.170 [2024-05-15 03:23:54.050274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.170 [2024-05-15 03:23:54.113340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.170 [2024-05-15 03:23:54.113862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.170 [2024-05-15 03:23:54.113913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.170 [2024-05-15 03:23:54.114305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.170 [2024-05-15 03:23:54.117387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.170 [2024-05-15 03:23:54.117452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.170 [2024-05-15 03:23:54.119188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.170 [2024-05-15 03:23:54.121022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.170 [2024-05-15 03:23:54.121080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.170 [2024-05-15 03:23:54.122597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.170 [2024-05-15 03:23:54.123846] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.124298] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.125636] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.127130] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.127598] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.129411] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.130474] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.132484] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.134809] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.135252] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.135806] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.137412] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.139559] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:29:23.170 [2024-05-15 03:23:54.141491] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.142494] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.143988] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.145567] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.146010] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.147430] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.148950] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.151217] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.152398] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.154298] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.156006] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.157632] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.158120] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.159783] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.161648] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.163821] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.164634] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.166136] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.167677] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.169373] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.170575] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.172076] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.173607] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.175474] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:29:23.170 [2024-05-15 03:23:54.177094] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.178602] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.180131] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.181922] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.183911] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.185731] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.187372] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.188496] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.190076] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.191847] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.193809] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.195963] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.197456] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.198985] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.200784] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.202307] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.203800] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.205304] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.207084] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.209826] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.211348] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.212790] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.214574] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.216771] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:29:23.170 [2024-05-15 03:23:54.218374] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.219923] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.221720] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.225399] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.227385] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.229040] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.230815] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.232949] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.234731] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.236675] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.238530] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.241342] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.242879] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.244666] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.246171] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.248047] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.249580] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.170 [2024-05-15 03:23:54.251381] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.252495] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.255632] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.257164] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.258960] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.259718] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.261951] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:29:23.171 [2024-05-15 03:23:54.263680] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.265519] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.265961] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.268888] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.270755] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.272544] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.273788] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.275774] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.277579] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.278930] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.279364] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.282313] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.284133] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.285013] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.286825] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.288866] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.290676] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.291118] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.291544] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.294600] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.296360] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.297630] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.299130] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.301368] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:29:23.171 [2024-05-15 03:23:54.302709] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.303158] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.303582] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.306017] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.307934] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.309611] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.310996] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.311966] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.312403] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.314326] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.171 [2024-05-15 03:23:54.315997] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.318314] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.318752] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.319187] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.321066] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.322043] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.324016] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.325707] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.326162] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.329049] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.329529] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.329594] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.331242] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.332083] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:29:23.458 [2024-05-15 03:23:54.332520] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.332576] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.333712] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.335052] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.336455] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.336514] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.337760] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.338413] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.338856] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.338921] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.340885] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.342228] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.343788] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.343846] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.344282] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.344797] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.345730] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.345789] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.347169] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.348450] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.349918] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.349975] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.350422] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.351090] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:29:23.458 [2024-05-15 03:23:54.352965] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.353029] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.354787] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.356138] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.356669] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.356724] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.357156] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.359102] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:29:23.458 [2024-05-15 03:23:54.359219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.360494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.360540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.361826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.361862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.361918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.362342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.362389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.362764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.364841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.364904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.366806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.366874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.368645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.368701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.458 [2024-05-15 03:23:54.369126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.458 [2024-05-15 03:23:54.369172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:29:23.458 [2024-05-15 03:23:54.369496 .. 03:23:54.835526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! (identical error repeated several hundred times over this interval; duplicate lines elided)
00:29:23.724 [2024-05-15 03:23:54.835576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.836068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.836363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.841352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.841412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.843048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.843092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.843601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.844787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.844833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.846045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.846380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.851367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.851426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.851840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.851892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.852324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.853440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.853492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.854507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.854804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.859890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.859948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.860372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.724 [2024-05-15 03:23:54.860419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.860861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.862483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.862533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.863092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.863383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.868057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.724 [2024-05-15 03:23:54.868116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.869286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.869332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.870829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.870891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.872366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.872416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.872706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.877372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.877431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.877471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.877898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.879413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.879471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.880254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.725 [2024-05-15 03:23:54.880305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.880593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.986 [2024-05-15 03:23:54.885062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.886702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.886748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.887516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.889703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.889767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.890642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.890691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.891046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.895469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.896352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.896402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.897927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.899829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.899891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.901532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.901578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.902004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.902949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.903384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.903431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.905098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.907441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.907515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.986 [2024-05-15 03:23:54.909306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.909363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.909734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.913204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.914171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.914219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.915242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.916808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.916875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.918567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.918613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.918913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.921972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.923445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.923494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.923998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.925054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.925113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.926643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.926689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.927085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.931069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.931505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.931555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.986 [2024-05-15 03:23:54.931989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.933043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.933100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.935092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.935148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.935611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.938381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.940164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.940215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.940635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.941544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.941603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.943228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.943275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.943758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.947031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.948392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.948440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.949073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.950040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.950096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.950519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.950573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.950987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.986 [2024-05-15 03:23:54.954755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.955201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.955253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.955675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.956966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.957022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.958796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.986 [2024-05-15 03:23:54.958844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.959324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.961805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.961865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.963589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.963638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.964658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.964716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.965147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.965198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.965488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.968240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.968308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.968350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.968392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.969404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.969460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.987 [2024-05-15 03:23:54.969506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.969548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.969836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.971755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.971812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.971866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.971908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.972480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.972527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.972568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.972608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.972985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.974865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.974919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.974960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.975001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.975431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.975478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.975519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.975559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.976048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.978435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.978491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.978531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.987 [2024-05-15 03:23:54.978571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.979021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.979071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.979112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.979153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.979508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.982867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.982923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.982964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.983005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.983438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.983489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.983539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.983582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.983961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.987068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.987122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.987162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.987202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.987792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.987840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.987886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.987939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.988267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.987 [2024-05-15 03:23:54.991919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.991978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.992020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.992061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.992557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.992603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.992643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.992682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.993117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.996790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.996856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.996897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.996938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.997411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.997459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.997500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.997541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:54.997999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.001667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.001720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.001761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.001801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.002288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.002336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.987 [2024-05-15 03:23:55.002381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.002422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.002795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.006493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.006546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.006587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.006628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.007073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.007122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.007163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.007207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.007705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.011447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.011499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.011539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.011580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.012049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.012100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.012149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.012191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.012522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.015486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.015539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.015579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.987 [2024-05-15 03:23:55.015623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.016117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.016165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.016207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.016248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.016670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.020505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.020559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.020599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.020644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.021081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.021128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.021180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.021223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.021665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.025377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.025430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.025477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.025519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.025960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.026016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.026058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.026100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.026387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.987 [2024-05-15 03:23:55.028924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.028977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.029016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.029057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.029537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.029585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.029626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.029679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.030037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.033766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.033819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.033866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.033907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.034333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.034389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.034433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.034475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.034908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.038682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.040188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.040240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.041168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.041602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.041651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.987 [2024-05-15 03:23:55.043029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.043076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.043499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.047214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.048609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.048659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.050005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.050910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.050982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.987 [2024-05-15 03:23:55.052835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.052884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.053326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.057118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.058502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.058553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.059487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.060738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.060796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.061880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.061927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.062272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.065268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.067115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:23.988 [2024-05-15 03:23:55.067163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:23.988 [2024-05-15 03:23:55.068719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:29:24.515 [2024-05-15 03:23:55.596868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
(identical "Failed to get src_mbufs!" lines, roughly 500 in total, elided between the first and last occurrences shown above; their timestamps run 03:23:55.068719-03:23:55.596868, console clock 00:29:23.988-00:29:24.515)
00:29:24.515 [2024-05-15 03:23:55.596910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.597342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.597390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.597432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.597491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.597970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.599297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.599361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.599405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.599450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.599893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.599943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.599989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.600030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.600318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.601447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.601500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.601540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.601581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.602091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.602138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.602184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.602227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.602517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.515 [2024-05-15 03:23:55.603597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.603648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.603689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.603734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.604166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.604227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.604269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.604316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.604602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.605739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.605791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.605831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.605886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.606315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.606363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.606418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.606459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.606746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.607882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.609665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.609717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.610140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.610739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.610788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.515 [2024-05-15 03:23:55.612627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.612682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.613026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.614070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.615688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.615741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.616167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.617428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.617486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.618606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.618656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.618947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.620053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.621513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.621565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.621985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.624407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.624471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.625933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.625984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.515 [2024-05-15 03:23:55.626350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.627437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.627877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.627929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.516 [2024-05-15 03:23:55.628343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.630225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.630282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.631660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.631710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.632004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.633065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.634457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.634509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.635164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.635713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.636150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.636216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.638120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.638490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.639575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.640013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.641076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.641123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.641626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.643187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.643252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.643664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.644146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.516 [2024-05-15 03:23:55.647806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.647875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.649412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.649459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.649961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.650389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.650434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.651554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.651891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.656077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.656136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.657630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.657677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.658173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.659792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.659841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.660468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.660817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.663445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.663505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.665145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.665194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.516 [2024-05-15 03:23:55.665621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.776 [2024-05-15 03:23:55.667335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.776 [2024-05-15 03:23:55.667383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.669043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.669370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.670736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.670792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.671515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.671563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.672048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.673593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.673641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.675149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.675442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.677920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.677977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.679490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.679538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.680013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.680583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.680630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.682065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.682423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.685119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.685184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.687037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.777 [2024-05-15 03:23:55.687092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.687535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.689049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.689098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.690609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.690943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.693880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.693967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.694387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.694435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.694870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.696705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.696765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.698587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.698887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.701416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.701472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.703017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.703066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.703492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.703942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.703989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.704405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.704693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.777 [2024-05-15 03:23:55.707157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.707215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.708209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.708256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.708697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.710224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.710274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.711789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.712082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.714522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.716008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.716058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.717572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.718059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.719164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.719214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.720857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.721150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.722208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.722865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.723289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.724389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.724863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.726400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.777 [2024-05-15 03:23:55.727918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.729487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.729780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.732309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.733743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.734171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.734591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.736630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.738150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.739674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.740663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.740959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.743425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.744170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.744586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.745552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.747611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.749374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.751322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.752638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.753014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.755783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.756212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.777 [2024-05-15 03:23:55.756633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.777 [2024-05-15 03:23:55.758366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.760318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.761857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.763047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.764973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.765264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.767419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.767854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.768471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.770021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.772182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.773833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.774807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.776319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.776610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.779748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.781075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.782560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.784067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.786197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.787743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.789230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.790755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.791091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.778 [2024-05-15 03:23:55.794675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.796068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.796823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.798748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.800118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.800548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.800970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.802711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.803083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.805297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.806136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.806555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.807054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.808963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.809548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.811279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.812873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.813248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.814953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.816806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.818294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.818967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.820694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.821446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.778 [2024-05-15 03:23:55.821873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.822478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.822767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.825685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.827174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.827847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.828270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.830650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.832255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.832872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.834557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.834888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.838085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.838511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.840297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.841852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.844270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.845743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.846430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.846853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.847266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.849011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.850890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.852335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.778 [2024-05-15 03:23:55.853027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.853963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.855661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.857321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.857902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.858195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.859614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.860151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.861784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.863484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.865684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.867395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.867931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.868354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.868880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.871057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.872790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.873890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.874956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.875987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.877914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.879204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.880079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:29:24.778 [2024-05-15 03:23:55.880377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:29:24.778 [2024-05-15 03:23:55.881822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... the same "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 repeats several hundred more times between 03:23:55.882 and 03:23:56.108; duplicate entries trimmed ...]
00:29:25.045 [2024-05-15 03:23:56.112331] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:29:25.612
00:29:25.612 Latency(us)
00:29:25.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:25.612 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:25.612 Verification LBA range: start 0x0 length 0x100
00:29:25.612 crypto_ram : 6.03 41.80 2.61 0.00 0.00 2959120.31 161780.30 3115768.69
00:29:25.612 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:25.612 Verification LBA range: start 0x100 length 0x100
00:29:25.612 crypto_ram : 6.03 41.48 2.59 0.00 0.00 2976779.44 154789.79 3115768.69
00:29:25.613 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:25.613 Verification LBA range: start 0x0 length 0x100
00:29:25.613 crypto_ram2 : 6.03 42.44 2.65 0.00 0.00 2806715.73 71902.35 2955985.68
00:29:25.613 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:25.613 Verification LBA range: start 0x100 length 0x100
00:29:25.613 crypto_ram2 : 6.03 42.12 2.63 0.00 0.00 2822627.41 66909.14 3019898.88
00:29:25.613 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:25.613 Verification LBA range: start 0x0 length 0x100
00:29:25.613 crypto_ram3 : 5.68 235.31 14.71 0.00 0.00 479022.56 18599.74 762963.87
00:29:25.613 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:25.613 Verification LBA range: start 0x100 length 0x100
00:29:25.613 crypto_ram3 : 5.66 228.89 14.31 0.00 0.00 492091.18 18225.25 762963.87
00:29:25.613 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:25.613 Verification LBA range: start 0x0 length 0x100
00:29:25.613 crypto_ram4 : 5.78 250.10 15.63 0.00 0.00 433703.92 2637.04 583207.98
00:29:25.613 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:25.613 Verification LBA range: start 0x100 length 0x100
00:29:25.613 crypto_ram4 : 5.77 246.01 15.38 0.00 0.00 441088.75 10111.27 567229.68
00:29:25.613 ===================================================================================================================
00:29:25.613 Total : 1128.14 70.51 0.00 0.00 838182.18 2637.04 3115768.69
00:29:26.180
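The wall of "Failed to get src_mbufs!" messages above is the dpdk_cryptodev accel module reporting that it could not pull a full burst of mbufs from its DPDK mempool while the verify job kept 128 I/Os of 64 KiB in flight; the dst_mbufs variant from line 476 is the same condition on the destination side. The run still finishes with Fail/s at 0.00, so the tasks are evidently requeued and retried once mbufs return to the pool. A minimal sketch of the failure mode against the public DPDK mbuf API follows; the pool, burst size, and helper name are illustrative, not SPDK's actual internals:

    #include <errno.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define CRYPTO_BURST 32 /* assumed burst size, for illustration only */

    /* Try to take a full burst of source mbufs for one crypto task. */
    static int
    alloc_src_mbufs(struct rte_mempool *mbuf_pool, struct rte_mbuf **src_mbufs)
    {
        /*
         * rte_pktmbuf_alloc_bulk() is all-or-nothing: it returns 0 when the
         * whole burst was allocated and a negative value when the pool cannot
         * satisfy it, so there is nothing to free on the error path.
         */
        if (rte_pktmbuf_alloc_bulk(mbuf_pool, src_mbufs, CRYPTO_BURST) != 0) {
            /* The condition logged above: surface -ENOMEM so the caller
             * can queue the task for a later retry instead of spinning. */
            return -ENOMEM;
        }
        return 0;
    }

At a queue depth of 128 with 64 KiB I/Os, transient pool exhaustion like this is expected noise rather than a failure, which is consistent with the verify test passing below.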
00:29:26.180 real 0m9.246s 00:29:26.180 user 0m17.605s 00:29:26.180 sys 0m0.429s 00:29:26.180 03:23:57 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:26.180 03:23:57 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.180 ************************************ 00:29:26.180 END TEST bdev_verify_big_io 00:29:26.180 ************************************ 00:29:26.180 03:23:57 blockdev_crypto_aesni -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:26.180 03:23:57 blockdev_crypto_aesni -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:29:26.180 03:23:57 blockdev_crypto_aesni -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:26.180 03:23:57 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:26.180 ************************************ 00:29:26.180 START TEST bdev_write_zeroes 00:29:26.180 ************************************ 00:29:26.180 03:23:57 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:26.180 [2024-05-15 03:23:57.185383] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:29:26.180 [2024-05-15 03:23:57.185437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59707 ] 00:29:26.180 [2024-05-15 03:23:57.281543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.438 [2024-05-15 03:23:57.373597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.438 [2024-05-15 03:23:57.394897] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:29:26.438 [2024-05-15 03:23:57.402910] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:29:26.438 [2024-05-15 03:23:57.410929] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:29:26.438 [2024-05-15 03:23:57.516861] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:29:28.969 [2024-05-15 03:23:59.824645] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:29:28.969 [2024-05-15 03:23:59.824711] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:29:28.969 [2024-05-15 03:23:59.824724] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:28.969 [2024-05-15 03:23:59.832664] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:29:28.969 [2024-05-15 03:23:59.832684] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:29:28.969 [2024-05-15 03:23:59.832698] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:28.969 [2024-05-15 03:23:59.840684] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key 
"test_dek_aesni_cbc_3" 00:29:28.969 [2024-05-15 03:23:59.840700] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:29:28.969 [2024-05-15 03:23:59.840708] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:28.969 [2024-05-15 03:23:59.848705] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:29:28.969 [2024-05-15 03:23:59.848721] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:29:28.969 [2024-05-15 03:23:59.848729] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:28.969 Running I/O for 1 seconds... 00:29:29.905 00:29:29.905 Latency(us) 00:29:29.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.905 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:29.905 crypto_ram : 1.03 1774.86 6.93 0.00 0.00 71601.38 5929.45 87880.66 00:29:29.905 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:29.905 crypto_ram2 : 1.03 1780.61 6.96 0.00 0.00 70948.70 5898.24 83886.08 00:29:29.905 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:29.905 crypto_ram3 : 1.02 13583.67 53.06 0.00 0.00 9272.64 2683.86 12170.97 00:29:29.905 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:29.905 crypto_ram4 : 1.02 13568.77 53.00 0.00 0.00 9239.21 2699.46 12233.39 00:29:29.905 =================================================================================================================== 00:29:29.905 Total : 30707.91 119.95 0.00 0.00 16469.29 2683.86 87880.66 00:29:30.472 00:29:30.472 real 0m4.225s 00:29:30.472 user 0m3.786s 00:29:30.472 sys 0m0.391s 00:29:30.472 03:24:01 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:30.472 03:24:01 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:29:30.472 ************************************ 00:29:30.472 END TEST bdev_write_zeroes 00:29:30.472 ************************************ 00:29:30.472 03:24:01 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:30.472 03:24:01 blockdev_crypto_aesni -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:29:30.472 03:24:01 blockdev_crypto_aesni -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:30.472 03:24:01 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:30.472 ************************************ 00:29:30.472 START TEST bdev_json_nonenclosed 00:29:30.472 ************************************ 00:29:30.472 03:24:01 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:30.472 [2024-05-15 03:24:01.485643] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:29:30.472 [2024-05-15 03:24:01.485697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60510 ] 00:29:30.472 [2024-05-15 03:24:01.585124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.730 [2024-05-15 03:24:01.676256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.731 [2024-05-15 03:24:01.676324] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:29:30.731 [2024-05-15 03:24:01.676340] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:30.731 [2024-05-15 03:24:01.676352] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:30.731 00:29:30.731 real 0m0.350s 00:29:30.731 user 0m0.238s 00:29:30.731 sys 0m0.110s 00:29:30.731 03:24:01 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:30.731 03:24:01 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:30.731 ************************************ 00:29:30.731 END TEST bdev_json_nonenclosed 00:29:30.731 ************************************ 00:29:30.731 03:24:01 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:30.731 03:24:01 blockdev_crypto_aesni -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:29:30.731 03:24:01 blockdev_crypto_aesni -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:30.731 03:24:01 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:30.731 ************************************ 00:29:30.731 START TEST bdev_json_nonarray 00:29:30.731 ************************************ 00:29:30.731 03:24:01 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:30.989 [2024-05-15 03:24:01.919095] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:29:30.989 [2024-05-15 03:24:01.919147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60659 ] 00:29:30.989 [2024-05-15 03:24:02.015298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.989 [2024-05-15 03:24:02.106613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.989 [2024-05-15 03:24:02.106689] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
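Both JSON negative tests here exercise json_config_prepare_ctx's validation, producing the "not enclosed in {}" and "'subsystems' should be an array" errors above. A minimal sketch of inputs that would trigger them (the actual contents of nonenclosed.json and nonarray.json are assumed):

    # 'Invalid JSON configuration: not enclosed in {}' -- top level is not a JSON object
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # "Invalid JSON configuration: 'subsystems' should be an array" -- object where an array is expected
    cat > nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF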
00:29:30.989 [2024-05-15 03:24:02.106705] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:30.989 [2024-05-15 03:24:02.106715] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:31.247 00:29:31.247 real 0m0.352s 00:29:31.247 user 0m0.232s 00:29:31.247 sys 0m0.117s 00:29:31.247 03:24:02 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:31.247 03:24:02 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:31.247 ************************************ 00:29:31.247 END TEST bdev_json_nonarray 00:29:31.247 ************************************ 00:29:31.247 03:24:02 blockdev_crypto_aesni -- bdev/blockdev.sh@787 -- # [[ crypto_aesni == bdev ]] 00:29:31.247 03:24:02 blockdev_crypto_aesni -- bdev/blockdev.sh@794 -- # [[ crypto_aesni == gpt ]] 00:29:31.247 03:24:02 blockdev_crypto_aesni -- bdev/blockdev.sh@798 -- # [[ crypto_aesni == crypto_sw ]] 00:29:31.247 03:24:02 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:29:31.247 03:24:02 blockdev_crypto_aesni -- bdev/blockdev.sh@811 -- # cleanup 00:29:31.247 03:24:02 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:29:31.247 03:24:02 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:29:31.247 03:24:02 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:29:31.247 03:24:02 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]] 00:29:31.247 03:24:02 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:29:31.247 03:24:02 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:29:31.247 00:29:31.247 real 1m12.216s 00:29:31.247 user 2m50.067s 00:29:31.247 sys 0m7.983s 00:29:31.247 03:24:02 blockdev_crypto_aesni -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:31.247 03:24:02 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:29:31.247 ************************************ 00:29:31.247 END TEST blockdev_crypto_aesni 00:29:31.247 ************************************ 00:29:31.247 03:24:02 -- spdk/autotest.sh@354 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:29:31.247 03:24:02 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:31.247 03:24:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:31.247 03:24:02 -- common/autotest_common.sh@10 -- # set +x 00:29:31.247 ************************************ 00:29:31.247 START TEST blockdev_crypto_sw 00:29:31.247 ************************************ 00:29:31.247 03:24:02 blockdev_crypto_sw -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:29:31.506 * Looking for test storage... 
00:29:31.506 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # uname -s 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # test_type=crypto_sw 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # crypto_device= 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # dek= 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # env_ctx= 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == bdev ]] 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == crypto_* ]] 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60727 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:29:31.506 03:24:02 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 60727 00:29:31.506 03:24:02 blockdev_crypto_sw -- common/autotest_common.sh@827 -- # '[' -z 60727 ']' 00:29:31.506 03:24:02 blockdev_crypto_sw -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.506 03:24:02 blockdev_crypto_sw -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:31.506 03:24:02 blockdev_crypto_sw -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:31.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.506 03:24:02 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:31.506 03:24:02 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:29:31.506 [2024-05-15 03:24:02.502980] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:29:31.506 [2024-05-15 03:24:02.503049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60727 ] 00:29:31.506 [2024-05-15 03:24:02.600570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.763 [2024-05-15 03:24:02.697736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.330 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:32.330 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@860 -- # return 0 00:29:32.330 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:29:32.330 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@711 -- # setup_crypto_sw_conf 00:29:32.330 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@193 -- # rpc_cmd 00:29:32.330 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.330 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:29:32.588 Malloc0 00:29:32.588 Malloc1 00:29:32.588 true 00:29:32.588 true 00:29:32.588 true 00:29:32.588 [2024-05-15 03:24:03.704374] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:29:32.588 crypto_ram 00:29:32.588 [2024-05-15 03:24:03.712403] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:29:32.588 crypto_ram2 00:29:32.588 [2024-05-15 03:24:03.720426] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:29:32.588 crypto_ram3 00:29:32.588 [ 00:29:32.588 { 00:29:32.588 "name": "Malloc1", 00:29:32.588 "aliases": [ 00:29:32.588 "06ba4d41-4459-47ac-b800-bab86f42f025" 00:29:32.588 ], 00:29:32.588 "product_name": "Malloc disk", 00:29:32.588 "block_size": 4096, 00:29:32.588 "num_blocks": 4096, 00:29:32.588 "uuid": "06ba4d41-4459-47ac-b800-bab86f42f025", 00:29:32.588 "assigned_rate_limits": { 00:29:32.588 "rw_ios_per_sec": 0, 00:29:32.588 "rw_mbytes_per_sec": 0, 00:29:32.588 "r_mbytes_per_sec": 0, 00:29:32.588 "w_mbytes_per_sec": 0 00:29:32.588 }, 00:29:32.588 "claimed": true, 00:29:32.588 "claim_type": "exclusive_write", 00:29:32.588 "zoned": false, 00:29:32.588 "supported_io_types": { 00:29:32.588 "read": true, 00:29:32.588 "write": true, 00:29:32.588 "unmap": true, 00:29:32.588 "write_zeroes": true, 00:29:32.588 "flush": true, 00:29:32.588 "reset": true, 00:29:32.588 "compare": false, 00:29:32.588 "compare_and_write": false, 00:29:32.588 "abort": true, 00:29:32.588 "nvme_admin": false, 00:29:32.588 "nvme_io": false 00:29:32.588 }, 00:29:32.588 "memory_domains": [ 00:29:32.588 { 00:29:32.588 "dma_device_id": "system", 00:29:32.588 "dma_device_type": 1 00:29:32.588 }, 00:29:32.588 { 00:29:32.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:32.588 "dma_device_type": 2 00:29:32.588 } 00:29:32.588 ], 00:29:32.588 "driver_specific": {} 00:29:32.588 } 00:29:32.588 ] 00:29:32.588 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.588 03:24:03 blockdev_crypto_sw -- 
bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:29:32.588 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.588 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # cat 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # jq -r .name 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "9996325c-b2af-5bfc-8895-756c26e6fc4c"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9996325c-b2af-5bfc-8895-756c26e6fc4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "6684ee29-e319-5256-b7ae-2ce692b37637"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' 
' "num_blocks": 4096,' ' "uuid": "6684ee29-e319-5256-b7ae-2ce692b37637",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:29:32.847 03:24:03 blockdev_crypto_sw -- bdev/blockdev.sh@754 -- # killprocess 60727 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@946 -- # '[' -z 60727 ']' 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@950 -- # kill -0 60727 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@951 -- # uname 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60727 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60727' 00:29:32.847 killing process with pid 60727 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@965 -- # kill 60727 00:29:32.847 03:24:03 blockdev_crypto_sw -- common/autotest_common.sh@970 -- # wait 60727 00:29:33.415 03:24:04 blockdev_crypto_sw -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:33.415 03:24:04 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:29:33.415 03:24:04 blockdev_crypto_sw -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:29:33.415 03:24:04 blockdev_crypto_sw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:33.415 03:24:04 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:29:33.415 ************************************ 00:29:33.415 START TEST bdev_hello_world 00:29:33.415 ************************************ 00:29:33.415 03:24:04 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:29:33.415 [2024-05-15 03:24:04.419404] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 
initialization... 00:29:33.415 [2024-05-15 03:24:04.419457] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61153 ] 00:29:33.415 [2024-05-15 03:24:04.514617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.674 [2024-05-15 03:24:04.605782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.674 [2024-05-15 03:24:04.772042] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:29:33.674 [2024-05-15 03:24:04.772104] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:29:33.674 [2024-05-15 03:24:04.772117] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:33.674 [2024-05-15 03:24:04.780046] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:29:33.674 [2024-05-15 03:24:04.780064] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:29:33.674 [2024-05-15 03:24:04.780073] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:33.674 [2024-05-15 03:24:04.788067] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:29:33.674 [2024-05-15 03:24:04.788083] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:29:33.674 [2024-05-15 03:24:04.788091] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:33.674 [2024-05-15 03:24:04.828091] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:33.674 [2024-05-15 03:24:04.828123] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:29:33.674 [2024-05-15 03:24:04.828139] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:33.674 [2024-05-15 03:24:04.829540] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:33.674 [2024-05-15 03:24:04.829610] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:33.674 [2024-05-15 03:24:04.829624] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:33.674 [2024-05-15 03:24:04.829656] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
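The hello_bdev exchange above (open crypto_ram, write "Hello World!", read it back, compare) is driven by the invocation recorded in the run_test line for this test; paths and the bdev name are as in this run:

    # write/read round trip through the first crypto bdev
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev \
        --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json \
        -b crypto_ram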
00:29:33.674 00:29:33.674 [2024-05-15 03:24:04.829673] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:33.932 00:29:33.932 real 0m0.689s 00:29:33.932 user 0m0.489s 00:29:33.932 sys 0m0.185s 00:29:33.932 03:24:05 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:33.932 03:24:05 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:33.932 ************************************ 00:29:33.932 END TEST bdev_hello_world 00:29:33.932 ************************************ 00:29:33.932 03:24:05 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:29:33.932 03:24:05 blockdev_crypto_sw -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:33.932 03:24:05 blockdev_crypto_sw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:33.932 03:24:05 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:29:34.191 ************************************ 00:29:34.191 START TEST bdev_bounds 00:29:34.191 ************************************ 00:29:34.191 03:24:05 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:29:34.191 03:24:05 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=61222 00:29:34.191 03:24:05 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:34.191 03:24:05 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:29:34.191 03:24:05 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 61222' 00:29:34.191 Process bdevio pid: 61222 00:29:34.191 03:24:05 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 61222 00:29:34.191 03:24:05 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 61222 ']' 00:29:34.191 03:24:05 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.191 03:24:05 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:34.191 03:24:05 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.191 03:24:05 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:34.191 03:24:05 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:34.191 [2024-05-15 03:24:05.184416] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
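The bdev_bounds test starting here runs in two halves: the bdevio app is launched with -w so it waits on its RPC socket, then tests.py fires the CUnit suites. A sketch of the pair (both commands appear in the trace; the backgrounding of the first half is assumed orchestration):

    # half 1: bdevio loads the bdev config and waits for the perform_tests RPC
    /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json &
    # half 2: kick off the boundary-condition suites shown below
    /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests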
00:29:34.191 [2024-05-15 03:24:05.184471] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61222 ]
[2024-05-15 03:24:05.281420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-05-15 03:24:05.378966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
[2024-05-15 03:24:05.379061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
[2024-05-15 03:24:05.379065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[2024-05-15 03:24:05.540154] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
[2024-05-15 03:24:05.540219] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
[2024-05-15 03:24:05.540231] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
[2024-05-15 03:24:05.548176] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2"
[2024-05-15 03:24:05.548195] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
[2024-05-15 03:24:05.548204] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
[2024-05-15 03:24:05.556198] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
[2024-05-15 03:24:05.556216] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
[2024-05-15 03:24:05.556224] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:29:35.016 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:29:35.016 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@860 -- # return 0
00:29:35.016 03:24:06 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
00:29:35.274 I/O targets:
00:29:35.274 crypto_ram: 32768 blocks of 512 bytes (16 MiB)
00:29:35.274 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB)
00:29:35.274
00:29:35.274
00:29:35.274 CUnit - A unit testing framework for C - Version 2.1-3
00:29:35.274 http://cunit.sourceforge.net/
00:29:35.274
00:29:35.274
00:29:35.274 Suite: bdevio tests on: crypto_ram3
00:29:35.274 Test: blockdev write read block ...passed
00:29:35.274 Test: blockdev write zeroes read block ...passed
00:29:35.274 Test: blockdev write zeroes read no split ...passed
00:29:35.274 Test: blockdev write zeroes read split ...passed
00:29:35.274 Test: blockdev write zeroes read split partial ...passed
00:29:35.274 Test: blockdev reset ...passed
00:29:35.274 Test: blockdev write read 8 blocks ...passed
00:29:35.274 Test: blockdev write read size > 128k ...passed
00:29:35.274 Test: blockdev write read invalid size ...passed
00:29:35.274 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:29:35.274 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:29:35.274 Test: blockdev write read max offset ...passed
00:29:35.274 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:29:35.274 Test: blockdev writev readv 8 blocks ...passed
00:29:35.274 Test: blockdev writev readv 30 x 1block ...passed
00:29:35.274 Test: blockdev writev readv block ...passed
00:29:35.274 Test: blockdev writev readv size > 128k ...passed
00:29:35.274 Test: blockdev writev readv size > 128k in two iovs ...passed
00:29:35.274 Test: blockdev comparev and writev ...passed
00:29:35.274 Test: blockdev nvme passthru rw ...passed
00:29:35.274 Test: blockdev nvme passthru vendor specific ...passed
00:29:35.274 Test: blockdev nvme admin passthru ...passed
00:29:35.274 Test: blockdev copy ...passed
00:29:35.274 Suite: bdevio tests on: crypto_ram
00:29:35.274 Test: blockdev write read block ...passed
00:29:35.274 Test: blockdev write zeroes read block ...passed
00:29:35.274 Test: blockdev write zeroes read no split ...passed
00:29:35.274 Test: blockdev write zeroes read split ...passed
00:29:35.274 Test: blockdev write zeroes read split partial ...passed
00:29:35.274 Test: blockdev reset ...passed
00:29:35.274 Test: blockdev write read 8 blocks ...passed
00:29:35.274 Test: blockdev write read size > 128k ...passed
00:29:35.274 Test: blockdev write read invalid size ...passed
00:29:35.274 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:29:35.274 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:29:35.274 Test: blockdev write read max offset ...passed
00:29:35.274 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:29:35.274 Test: blockdev writev readv 8 blocks ...passed
00:29:35.274 Test: blockdev writev readv 30 x 1block ...passed
00:29:35.274 Test: blockdev writev readv block ...passed
00:29:35.274 Test: blockdev writev readv size > 128k ...passed
00:29:35.274 Test: blockdev writev readv size > 128k in two iovs ...passed
00:29:35.274 Test: blockdev comparev and writev ...passed
00:29:35.274 Test: blockdev nvme passthru rw ...passed
00:29:35.274 Test: blockdev nvme passthru vendor specific ...passed
00:29:35.274 Test: blockdev nvme admin passthru ...passed
00:29:35.275 Test: blockdev copy ...passed
00:29:35.275
00:29:35.275 Run Summary: Type Total Ran Passed Failed Inactive
00:29:35.275 suites 2 2 n/a 0 0
00:29:35.275 tests 46 46 46 0 0
00:29:35.275 asserts 260 260 260 0 n/a
00:29:35.275
00:29:35.275 Elapsed time = 0.086 seconds
00:29:35.275 0
00:29:35.275 03:24:06 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 61222
00:29:35.275 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 61222 ']'
00:29:35.275 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 61222
00:29:35.275 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@951 -- # uname
00:29:35.275 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:29:35.275 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61222
00:29:35.275 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:29:35.275 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:29:35.275 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61222'
killing process with pid 61222
00:29:35.275 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@965 -- # kill 61222
00:29:35.275 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@970 -- # wait 61222
00:29:35.534 03:24:06
blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:29:35.534 00:29:35.534 real 0m1.453s 00:29:35.534 user 0m3.903s 00:29:35.534 sys 0m0.324s 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:35.534 ************************************ 00:29:35.534 END TEST bdev_bounds 00:29:35.534 ************************************ 00:29:35.534 03:24:06 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:29:35.534 03:24:06 blockdev_crypto_sw -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:29:35.534 03:24:06 blockdev_crypto_sw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:35.534 03:24:06 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:29:35.534 ************************************ 00:29:35.534 START TEST bdev_nbd 00:29:35.534 ************************************ 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=61777 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 61777 /var/tmp/spdk-nbd.sock 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 61777 ']' 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:35.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:35.534 03:24:06 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:35.793 [2024-05-15 03:24:06.720528] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:29:35.793 [2024-05-15 03:24:06.720582] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.793 [2024-05-15 03:24:06.817330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.793 [2024-05-15 03:24:06.911714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.051 [2024-05-15 03:24:07.073258] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:29:36.051 [2024-05-15 03:24:07.073319] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:29:36.051 [2024-05-15 03:24:07.073331] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:36.051 [2024-05-15 03:24:07.081277] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:29:36.051 [2024-05-15 03:24:07.081295] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:29:36.051 [2024-05-15 03:24:07.081304] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:36.051 [2024-05-15 03:24:07.089299] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:29:36.051 [2024-05-15 03:24:07.089315] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:29:36.051 [2024-05-15 03:24:07.089323] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- 
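For the bdev_nbd test above, bdev_svc is started on its own RPC socket and the crypto bdevs are then exported as kernel /dev/nbd* block devices. A minimal sketch of the export/teardown pair that the nbd_common.sh helpers traced below wrap:

    # export a bdev as a kernel block device over the dedicated socket, then tear it down
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0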
bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:29:36.618 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:29:36.876 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:36.877 1+0 records in 00:29:36.877 1+0 records out 00:29:36.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230028 s, 17.8 MB/s 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:29:36.877 03:24:07 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
nbd_device=/dev/nbd1 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:37.135 1+0 records in 00:29:37.135 1+0 records out 00:29:37.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223199 s, 18.4 MB/s 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:29:37.135 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:37.394 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:37.394 { 00:29:37.394 "nbd_device": "/dev/nbd0", 00:29:37.394 "bdev_name": "crypto_ram" 00:29:37.394 }, 00:29:37.394 { 00:29:37.394 "nbd_device": "/dev/nbd1", 00:29:37.394 "bdev_name": "crypto_ram3" 00:29:37.394 } 00:29:37.394 ]' 00:29:37.394 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:37.394 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:37.394 { 00:29:37.394 "nbd_device": "/dev/nbd0", 00:29:37.394 "bdev_name": "crypto_ram" 00:29:37.394 }, 00:29:37.394 { 00:29:37.394 "nbd_device": "/dev/nbd1", 00:29:37.394 "bdev_name": "crypto_ram3" 00:29:37.394 } 00:29:37.394 ]' 00:29:37.394 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:37.693 03:24:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:37.974 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:37.974 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:37.974 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:37.974 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:37.974 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:37.974 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:37.974 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:37.974 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:37.974 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:37.974 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:37.974 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:38.232 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:38.232 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:38.232 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:38.489 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:38.489 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:38.489 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:38.489 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:38.489 03:24:09 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:38.489 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:38.489 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:29:38.489 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:38.489 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:29:38.489 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:38.490 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:29:38.748 /dev/nbd0 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:29:38.748 1+0 records in 00:29:38.748 1+0 records out 00:29:38.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000150983 s, 27.1 MB/s 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:38.748 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:29:39.007 /dev/nbd1 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:39.007 1+0 records in 00:29:39.007 1+0 records out 00:29:39.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263406 s, 15.6 MB/s 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # 
local rpc_server=/var/tmp/spdk-nbd.sock 00:29:39.007 03:24:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:39.265 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:39.266 { 00:29:39.266 "nbd_device": "/dev/nbd0", 00:29:39.266 "bdev_name": "crypto_ram" 00:29:39.266 }, 00:29:39.266 { 00:29:39.266 "nbd_device": "/dev/nbd1", 00:29:39.266 "bdev_name": "crypto_ram3" 00:29:39.266 } 00:29:39.266 ]' 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:39.266 { 00:29:39.266 "nbd_device": "/dev/nbd0", 00:29:39.266 "bdev_name": "crypto_ram" 00:29:39.266 }, 00:29:39.266 { 00:29:39.266 "nbd_device": "/dev/nbd1", 00:29:39.266 "bdev_name": "crypto_ram3" 00:29:39.266 } 00:29:39.266 ]' 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:39.266 /dev/nbd1' 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:39.266 /dev/nbd1' 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:39.266 256+0 records in 00:29:39.266 256+0 records out 00:29:39.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00998278 s, 105 MB/s 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:39.266 256+0 records in 00:29:39.266 256+0 records out 00:29:39.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222272 s, 47.2 MB/s 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:39.266 256+0 records in 00:29:39.266 256+0 records out 00:29:39.266 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0331264 s, 31.7 MB/s 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:39.266 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:39.525 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:39.525 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:39.525 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:39.525 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:39.525 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:39.525 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:39.525 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:39.525 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:39.525 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:39.525 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:39.783 03:24:10 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:39.783 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:39.783 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:39.783 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:39.783 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:39.783 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:39.783 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:39.783 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:39.783 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:39.783 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:39.783 03:24:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:40.041 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:40.300 malloc_lvol_verify 00:29:40.300 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:40.558 637ca22a-8983-4850-9e6f-3e8810a7607c 00:29:40.558 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_lvol_create lvol 4 -l lvs 00:29:40.816 2994fe7e-f153-4c2c-95da-c658952e75ce 00:29:40.817 03:24:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:41.076 /dev/nbd0 00:29:41.076 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:41.076 mke2fs 1.46.5 (30-Dec-2021) 00:29:41.076 Discarding device blocks: 0/4096 done 00:29:41.076 Creating filesystem with 4096 1k blocks and 1024 inodes 00:29:41.076 00:29:41.076 Allocating group tables: 0/1 done 00:29:41.076 Writing inode tables: 0/1 done 00:29:41.076 Creating journal (1024 blocks): done 00:29:41.076 Writing superblocks and filesystem accounting information: 0/1 done 00:29:41.076 00:29:41.076 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:41.076 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:41.076 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:41.076 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:41.076 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:41.076 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:41.076 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:41.076 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 61777 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 61777 ']' 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 61777 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61777 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61777' 00:29:41.335 killing process with pid 61777 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@965 -- # kill 61777 00:29:41.335 03:24:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@970 -- # wait 61777 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:29:41.595 00:29:41.595 real 0m5.999s 00:29:41.595 user 0m9.183s 00:29:41.595 sys 0m1.766s 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:41.595 ************************************ 00:29:41.595 END TEST bdev_nbd 00:29:41.595 ************************************ 00:29:41.595 03:24:12 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:29:41.595 03:24:12 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = nvme ']' 00:29:41.595 03:24:12 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = gpt ']' 00:29:41.595 03:24:12 blockdev_crypto_sw -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:29:41.595 03:24:12 blockdev_crypto_sw -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:41.595 03:24:12 blockdev_crypto_sw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:41.595 03:24:12 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:29:41.595 ************************************ 00:29:41.595 START TEST bdev_fio 00:29:41.595 ************************************ 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:29:41.595 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:29:41.595 03:24:12 
blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:29:41.595 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:29:41.854 ************************************ 00:29:41.854 START TEST bdev_fio_rw_verify 00:29:41.854 ************************************ 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:41.854 03:24:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:29:42.112 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:42.112 
job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:42.112 fio-3.35 00:29:42.112 Starting 2 threads 00:29:54.320 00:29:54.320 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=63253: Wed May 15 03:24:23 2024 00:29:54.320 read: IOPS=24.8k, BW=96.7MiB/s (101MB/s)(967MiB/10001msec) 00:29:54.320 slat (usec): min=8, max=506, avg=18.32, stdev= 9.02 00:29:54.320 clat (usec): min=4, max=707, avg=130.94, stdev=82.53 00:29:54.320 lat (usec): min=14, max=733, avg=149.26, stdev=89.07 00:29:54.320 clat percentiles (usec): 00:29:54.320 | 50.000th=[ 111], 99.000th=[ 420], 99.900th=[ 523], 99.990th=[ 562], 00:29:54.320 | 99.999th=[ 627] 00:29:54.320 write: IOPS=29.8k, BW=116MiB/s (122MB/s)(1104MiB/9487msec); 0 zone resets 00:29:54.320 slat (usec): min=9, max=1479, avg=29.71, stdev=12.66 00:29:54.320 clat (usec): min=16, max=1723, avg=173.60, stdev=116.64 00:29:54.320 lat (usec): min=33, max=1749, avg=203.31, stdev=125.88 00:29:54.320 clat percentiles (usec): 00:29:54.320 | 50.000th=[ 149], 99.000th=[ 570], 99.900th=[ 709], 99.990th=[ 750], 00:29:54.320 | 99.999th=[ 1680] 00:29:54.320 bw ( KiB/s): min=81552, max=125344, per=94.48%, avg=112607.16, stdev=6916.29, samples=38 00:29:54.320 iops : min=20388, max=31336, avg=28151.79, stdev=1729.07, samples=38 00:29:54.320 lat (usec) : 10=0.01%, 20=0.01%, 50=10.28%, 100=24.86%, 250=49.36% 00:29:54.320 lat (usec) : 500=14.45%, 750=1.03%, 1000=0.01% 00:29:54.320 lat (msec) : 2=0.01% 00:29:54.320 cpu : usr=99.63%, sys=0.01%, ctx=35, majf=0, minf=613 00:29:54.320 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:54.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.320 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.320 issued rwts: total=247629,282684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.320 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:54.320 00:29:54.320 Run status group 0 (all jobs): 00:29:54.320 READ: bw=96.7MiB/s (101MB/s), 96.7MiB/s-96.7MiB/s (101MB/s-101MB/s), io=967MiB (1014MB), run=10001-10001msec 00:29:54.320 WRITE: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=1104MiB (1158MB), run=9487-9487msec 00:29:54.320 00:29:54.320 real 0m11.191s 00:29:54.320 user 0m28.979s 00:29:54.320 sys 0m0.305s 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:29:54.320 ************************************ 00:29:54.320 END TEST bdev_fio_rw_verify 00:29:54.320 ************************************ 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 
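The rw_verify pass that just finished is plain fio driven through SPDK's external bdev engine: fio_bdev LD_PRELOADs build/fio/spdk_bdev and hands it the same bdev.json the rest of the suite uses. A minimal way to reproduce it by hand, sketched under the assumption of a hand-written job file (the [global] verify option below is illustrative; fio_config_gen generates the real one):

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    # One job per crypto bdev, matching the job banner fio printed above.
    cat > /tmp/bdev.fio <<'EOF'
    [global]
    thread=1            ; required by the SPDK fio plugin
    rw=randwrite
    verify=crc32c       ; illustrative; the generated file picks the verify options
    [job_crypto_ram]
    filename=crypto_ram
    [job_crypto_ram3]
    filename=crypto_ram3
    EOF
    # Same flags as the fio_bdev invocation logged above.
    LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf="$SPDK/test/bdev/bdev.json" --verify_state_save=0 \
        /tmp/bdev.fio

The trim variant being prepared next differs only in the workload: fio_config_gen is called with trim instead of verify, which turns the jobs into rw=trimwrite, as the job banner further down shows.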
00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "9996325c-b2af-5bfc-8895-756c26e6fc4c"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9996325c-b2af-5bfc-8895-756c26e6fc4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "6684ee29-e319-5256-b7ae-2ce692b37637"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "6684ee29-e319-5256-b7ae-2ce692b37637",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": 
"crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:29:54.320 crypto_ram3 ]] 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "9996325c-b2af-5bfc-8895-756c26e6fc4c"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9996325c-b2af-5bfc-8895-756c26e6fc4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "6684ee29-e319-5256-b7ae-2ce692b37637"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "6684ee29-e319-5256-b7ae-2ce692b37637",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:29:54.320 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:29:54.321 ************************************ 00:29:54.321 START TEST bdev_fio_trim 00:29:54.321 ************************************ 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # shift 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libasan 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:29:54.321 
03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:54.321 03:24:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:29:54.321 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:54.321 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:54.321 fio-3.35 00:29:54.321 Starting 2 threads 00:30:04.296 00:30:04.296 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=65196: Wed May 15 03:24:35 2024 00:30:04.296 write: IOPS=55.8k, BW=218MiB/s (228MB/s)(2178MiB/10001msec); 0 zone resets 00:30:04.296 slat (usec): min=8, max=1459, avg=15.61, stdev= 3.84 00:30:04.296 clat (usec): min=18, max=1700, avg=117.44, stdev=65.73 00:30:04.296 lat (usec): min=32, max=1746, avg=133.05, stdev=68.14 00:30:04.296 clat percentiles (usec): 00:30:04.296 | 50.000th=[ 94], 99.000th=[ 251], 99.900th=[ 269], 99.990th=[ 429], 00:30:04.296 | 99.999th=[ 734] 00:30:04.296 bw ( KiB/s): min=213016, max=225776, per=99.97%, avg=222990.74, stdev=1707.06, samples=38 00:30:04.296 iops : min=53254, max=56444, avg=55747.68, stdev=426.77, samples=38 00:30:04.296 trim: IOPS=55.8k, BW=218MiB/s (228MB/s)(2178MiB/10001msec); 0 zone resets 00:30:04.296 slat (nsec): min=3924, max=54286, avg=7397.91, stdev=1869.24 00:30:04.296 clat (usec): min=32, max=450, avg=78.06, stdev=24.52 00:30:04.296 lat (usec): min=37, max=486, avg=85.46, stdev=24.77 00:30:04.296 clat percentiles (usec): 00:30:04.296 | 50.000th=[ 79], 99.000th=[ 133], 99.900th=[ 147], 99.990th=[ 251], 00:30:04.296 | 99.999th=[ 396] 00:30:04.296 bw ( KiB/s): min=213048, max=225784, per=99.97%, avg=222992.42, stdev=1705.16, samples=38 00:30:04.296 iops : min=53262, max=56446, avg=55748.11, stdev=426.29, samples=38 00:30:04.296 lat (usec) : 20=0.01%, 50=16.21%, 100=49.57%, 250=33.72%, 500=0.50% 00:30:04.296 lat (usec) : 750=0.01%, 1000=0.01% 00:30:04.296 lat (msec) : 2=0.01% 00:30:04.296 cpu : usr=99.69%, sys=0.01%, ctx=28, majf=0, minf=260 00:30:04.296 IO depths : 1=7.4%, 2=17.4%, 4=60.1%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:04.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.296 complete : 0=0.0%, 4=86.9%, 8=13.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.296 issued rwts: total=0,557688,557688,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.296 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:04.296 00:30:04.296 Run status group 0 (all jobs): 00:30:04.296 WRITE: bw=218MiB/s (228MB/s), 218MiB/s-218MiB/s (228MB/s-228MB/s), io=2178MiB 
(2284MB), run=10001-10001msec 00:30:04.296 TRIM: bw=218MiB/s (228MB/s), 218MiB/s-218MiB/s (228MB/s-228MB/s), io=2178MiB (2284MB), run=10001-10001msec 00:30:04.296 00:30:04.296 real 0m11.196s 00:30:04.296 user 0m28.797s 00:30:04.296 sys 0m0.301s 00:30:04.296 03:24:35 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:04.296 03:24:35 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:30:04.296 ************************************ 00:30:04.296 END TEST bdev_fio_trim 00:30:04.296 ************************************ 00:30:04.296 03:24:35 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:30:04.296 03:24:35 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:30:04.296 03:24:35 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:30:04.296 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:04.296 03:24:35 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:30:04.296 00:30:04.296 real 0m22.702s 00:30:04.296 user 0m57.961s 00:30:04.296 sys 0m0.746s 00:30:04.296 03:24:35 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:04.296 03:24:35 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:04.296 ************************************ 00:30:04.296 END TEST bdev_fio 00:30:04.296 ************************************ 00:30:04.555 03:24:35 blockdev_crypto_sw -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:04.555 03:24:35 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:04.555 03:24:35 blockdev_crypto_sw -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:30:04.555 03:24:35 blockdev_crypto_sw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:04.555 03:24:35 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:04.555 ************************************ 00:30:04.555 START TEST bdev_verify 00:30:04.555 ************************************ 00:30:04.555 03:24:35 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:04.555 [2024-05-15 03:24:35.562979] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:30:04.555 [2024-05-15 03:24:35.563031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67005 ] 00:30:04.555 [2024-05-15 03:24:35.662887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:04.814 [2024-05-15 03:24:35.754879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.814 [2024-05-15 03:24:35.754883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.814 [2024-05-15 03:24:35.915178] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:30:04.814 [2024-05-15 03:24:35.915237] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:04.814 [2024-05-15 03:24:35.915249] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:04.814 [2024-05-15 03:24:35.923201] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:30:04.814 [2024-05-15 03:24:35.923218] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:04.814 [2024-05-15 03:24:35.923227] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:04.814 [2024-05-15 03:24:35.931227] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:30:04.814 [2024-05-15 03:24:35.931243] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:30:04.814 [2024-05-15 03:24:35.931251] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:05.072 Running I/O for 5 seconds... 
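bdev_verify swaps fio for the bdevperf example app; the whole test is the single command line logged above. The same invocation, annotated, with every value copied from that command (the gloss on -C is inferred from the per-core jobs in the results below):

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    args=(
        --json "$SPDK/test/bdev/bdev.json"   # bdev config: malloc bases + crypto_sw vbdevs
        -q 128       # queue depth per job
        -o 4096      # I/O size in bytes
        -w verify    # write, then read back and compare
        -t 5         # run time in seconds
        -C           # every core in the mask drives every bdev
        -m 0x3       # core mask: cores 0 and 1, the two reactors started above
    )
    "$SPDK/build/examples/bdevperf" "${args[@]}"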
00:30:10.362 00:30:10.362 Latency(us) 00:30:10.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.362 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:10.362 Verification LBA range: start 0x0 length 0x800 00:30:10.362 crypto_ram : 5.02 5097.62 19.91 0.00 0.00 25002.58 1927.07 31831.77 00:30:10.362 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:10.362 Verification LBA range: start 0x800 length 0x800 00:30:10.362 crypto_ram : 5.01 5108.90 19.96 0.00 0.00 24948.44 1771.03 31831.77 00:30:10.362 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:10.362 Verification LBA range: start 0x0 length 0x800 00:30:10.362 crypto_ram3 : 5.02 2547.44 9.95 0.00 0.00 49940.39 8238.81 36700.16 00:30:10.362 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:10.362 Verification LBA range: start 0x800 length 0x800 00:30:10.362 crypto_ram3 : 5.03 2571.30 10.04 0.00 0.00 49474.10 2309.36 36700.16 00:30:10.362 =================================================================================================================== 00:30:10.362 Total : 15325.26 59.86 0.00 0.00 33246.46 1771.03 36700.16 00:30:10.362 00:30:10.362 real 0m5.754s 00:30:10.362 user 0m10.885s 00:30:10.362 sys 0m0.197s 00:30:10.362 03:24:41 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:10.362 03:24:41 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:30:10.362 ************************************ 00:30:10.362 END TEST bdev_verify 00:30:10.362 ************************************ 00:30:10.362 03:24:41 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:10.362 03:24:41 blockdev_crypto_sw -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:30:10.362 03:24:41 blockdev_crypto_sw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:10.362 03:24:41 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:10.362 ************************************ 00:30:10.362 START TEST bdev_verify_big_io 00:30:10.362 ************************************ 00:30:10.362 03:24:41 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:10.362 [2024-05-15 03:24:41.384510] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:30:10.362 [2024-05-15 03:24:41.384561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67956 ] 00:30:10.362 [2024-05-15 03:24:41.484742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:10.621 [2024-05-15 03:24:41.575412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.621 [2024-05-15 03:24:41.575417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.621 [2024-05-15 03:24:41.745263] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:30:10.621 [2024-05-15 03:24:41.745323] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:10.621 [2024-05-15 03:24:41.745335] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:10.621 [2024-05-15 03:24:41.753285] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:30:10.621 [2024-05-15 03:24:41.753302] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:10.621 [2024-05-15 03:24:41.753310] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:10.621 [2024-05-15 03:24:41.761308] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:30:10.621 [2024-05-15 03:24:41.761324] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:30:10.621 [2024-05-15 03:24:41.761332] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:10.879 Running I/O for 5 seconds... 
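The three "Found key ... vbdev creation deferred" notices repeated at each bdevperf start are the software-crypto stack from bdev.json being rebuilt: accel crypto keys test_dek_sw, test_dek_sw2 and test_dek_sw3, with crypto vbdevs layered over Malloc0, Malloc1 and crypto_ram2 (so crypto_ram3 is a crypto bdev stacked on another crypto bdev). A rough sketch of the equivalent RPC sequence for one layer; the key material is made up, and the option spellings should be checked against rpc.py -h for this SPDK revision:

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    # Register a software AES-XTS key (hypothetical key bytes), then wrap a
    # malloc bdev with it.
    "$RPC" accel_crypto_key_create --cipher AES_XTS --name test_dek_sw \
        --key  00112233445566778899aabbccddeeff \
        --key2 ffeeddccbbaa99887766554433221100
    "$RPC" bdev_malloc_create -b Malloc0 16 512   # same helper the lvol test used earlier
    "$RPC" bdev_crypto_create --key-name test_dek_sw Malloc0 crypto_ram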
00:30:16.156 00:30:16.156 Latency(us) 00:30:16.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.156 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:16.156 Verification LBA range: start 0x0 length 0x80 00:30:16.156 crypto_ram : 5.10 426.87 26.68 0.00 0.00 292381.98 6397.56 401454.81 00:30:16.156 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:16.156 Verification LBA range: start 0x80 length 0x80 00:30:16.156 crypto_ram : 5.07 429.05 26.82 0.00 0.00 291068.80 6428.77 397460.24 00:30:16.156 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:16.156 Verification LBA range: start 0x0 length 0x80 00:30:16.156 crypto_ram3 : 5.26 243.16 15.20 0.00 0.00 492924.28 6085.49 413438.54 00:30:16.156 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:16.156 Verification LBA range: start 0x80 length 0x80 00:30:16.156 crypto_ram3 : 5.24 244.35 15.27 0.00 0.00 490554.42 6023.07 411441.25 00:30:16.156 =================================================================================================================== 00:30:16.156 Total : 1343.44 83.96 0.00 0.00 365804.64 6023.07 413438.54 00:30:16.415 00:30:16.415 real 0m6.006s 00:30:16.415 user 0m11.385s 00:30:16.415 sys 0m0.199s 00:30:16.415 03:24:47 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:16.415 03:24:47 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:30:16.415 ************************************ 00:30:16.415 END TEST bdev_verify_big_io 00:30:16.415 ************************************ 00:30:16.415 03:24:47 blockdev_crypto_sw -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:16.415 03:24:47 blockdev_crypto_sw -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:30:16.415 03:24:47 blockdev_crypto_sw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:16.415 03:24:47 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:16.415 ************************************ 00:30:16.415 START TEST bdev_write_zeroes 00:30:16.415 ************************************ 00:30:16.415 03:24:47 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:16.415 [2024-05-15 03:24:47.474677] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:30:16.415 [2024-05-15 03:24:47.474727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68885 ]
00:30:16.415 [2024-05-15 03:24:47.571022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:16.674 [2024-05-15 03:24:47.661115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:16.674 [2024-05-15 03:24:47.827397] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:30:16.674 [2024-05-15 03:24:47.827458] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:30:16.674 [2024-05-15 03:24:47.827470] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:30:16.933 [2024-05-15 03:24:47.835416] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2"
00:30:16.933 [2024-05-15 03:24:47.835434] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:30:16.933 [2024-05-15 03:24:47.835442] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:30:16.933 [2024-05-15 03:24:47.843437] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
00:30:16.933 [2024-05-15 03:24:47.843454] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:30:16.933 [2024-05-15 03:24:47.843462] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:30:16.933 Running I/O for 1 seconds...
00:30:17.870
00:30:17.870 Latency(us)
00:30:17.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:17.870 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:17.870 crypto_ram : 1.00 24211.53 94.58 0.00 0.00 5271.56 1404.34 7302.58
00:30:17.871 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:17.871 crypto_ram3 : 1.01 12150.95 47.46 0.00 0.00 10446.62 1778.83 10922.67
00:30:17.871 ===================================================================================================================
00:30:17.871 Total : 36362.48 142.04 0.00 0.00 7008.64 1404.34 10922.67
00:30:18.129
00:30:18.129 real 0m1.715s
00:30:18.129 user 0m1.506s
00:30:18.129 sys 0m0.190s
00:30:18.129 03:24:49 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable
00:30:18.129 03:24:49 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:30:18.129 ************************************
00:30:18.129 END TEST bdev_write_zeroes
00:30:18.129 ************************************
00:30:18.129 03:24:49 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:18.129 03:24:49 blockdev_crypto_sw -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:30:18.129 03:24:49 blockdev_crypto_sw -- common/autotest_common.sh@1103 -- # xtrace_disable
00:30:18.129 03:24:49 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:30:18.129 ************************************
00:30:18.129 START TEST bdev_json_nonenclosed
************************************ 00:30:18.129 03:24:49 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:18.129 [2024-05-15 03:24:49.264216] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:30:18.129 [2024-05-15 03:24:49.264267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69131 ] 00:30:18.387 [2024-05-15 03:24:49.360261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.387 [2024-05-15 03:24:49.449994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.387 [2024-05-15 03:24:49.450061] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:30:18.387 [2024-05-15 03:24:49.450078] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:18.387 [2024-05-15 03:24:49.450088] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:18.645 00:30:18.645 real 0m0.350s 00:30:18.645 user 0m0.243s 00:30:18.645 sys 0m0.105s 00:30:18.645 03:24:49 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:18.645 03:24:49 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:30:18.645 ************************************ 00:30:18.645 END TEST bdev_json_nonenclosed 00:30:18.645 ************************************ 00:30:18.645 03:24:49 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:18.645 03:24:49 blockdev_crypto_sw -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:30:18.645 03:24:49 blockdev_crypto_sw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:18.645 03:24:49 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:18.645 ************************************ 00:30:18.645 START TEST bdev_json_nonarray 00:30:18.645 ************************************ 00:30:18.645 03:24:49 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:18.645 [2024-05-15 03:24:49.695663] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:30:18.645 [2024-05-15 03:24:49.695715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69222 ] 00:30:18.645 [2024-05-15 03:24:49.793883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.904 [2024-05-15 03:24:49.887600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.904 [2024-05-15 03:24:49.887675] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
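The two JSON cases are negative tests: bdevperf is fed a deliberately malformed config and must fail cleanly. nonenclosed.json holds JSON that is not enclosed in a top-level {} object, and nonarray.json defines "subsystems" as something other than an array; the json_config errors here report exactly those two defects. For contrast, a minimal well-formed config has this shape (sketch only; subsystem contents elided, file name illustrative):

    # A valid SPDK config: one top-level object whose "subsystems" member is an array.
    echo '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > /tmp/minimal.json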
00:30:18.904 [2024-05-15 03:24:49.887693] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:18.904 [2024-05-15 03:24:49.887705] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:18.904 00:30:18.904 real 0m0.359s 00:30:18.904 user 0m0.235s 00:30:18.904 sys 0m0.122s 00:30:18.904 03:24:49 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:18.904 03:24:49 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:30:18.904 ************************************ 00:30:18.904 END TEST bdev_json_nonarray 00:30:18.904 ************************************ 00:30:18.904 03:24:50 blockdev_crypto_sw -- bdev/blockdev.sh@787 -- # [[ crypto_sw == bdev ]] 00:30:18.904 03:24:50 blockdev_crypto_sw -- bdev/blockdev.sh@794 -- # [[ crypto_sw == gpt ]] 00:30:18.904 03:24:50 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # [[ crypto_sw == crypto_sw ]] 00:30:18.904 03:24:50 blockdev_crypto_sw -- bdev/blockdev.sh@799 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:30:18.904 03:24:50 blockdev_crypto_sw -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:18.904 03:24:50 blockdev_crypto_sw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:18.904 03:24:50 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:30:19.162 ************************************ 00:30:19.162 START TEST bdev_crypto_enomem 00:30:19.162 ************************************ 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1121 -- # bdev_crypto_enomem 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local base_dev=base0 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local test_dev=crypt0 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local err_dev=EE_base0 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@638 -- # local qd=32 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # ERR_PID=69391 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@643 -- # waitforlisten 69391 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@827 -- # '[' -z 69391 ']' 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
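bdev_crypto_enomem, which starts here, drives random writes through a crypto vbdev stacked on an error-injection bdev and exercises the ENOMEM handling path. Reconstructed from the rpc_cmd calls and the base_dev/err_dev/test_dev variables in the xtrace (base0, EE_base0, crypt0), the stack is roughly the following; the RPC names are SPDK's, but the malloc size and option order are illustrative:

    ./scripts/rpc.py bdev_malloc_create -b base0 1024 512      # base bdev; size is a guess
    ./scripts/rpc.py bdev_error_create base0                    # wraps base0 as the error bdev EE_base0
    ./scripts/rpc.py bdev_crypto_create -n test_dek_sw EE_base0 crypt0
    ./scripts/rpc.py bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem   # inject ENOMEM on writes, as the xtrace below shows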
00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # xtrace_disable
00:30:19.163 03:24:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:30:19.163 [2024-05-15 03:24:50.135320] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:30:19.163 [2024-05-15 03:24:50.135372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69391 ]
00:30:19.163 [2024-05-15 03:24:50.225311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:19.163 [2024-05-15 03:24:50.315042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@860 -- # return 0
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@645 -- # rpc_cmd
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:30:20.098 true
00:30:20.098 base0
00:30:20.098 true
00:30:20.098 [2024-05-15 03:24:51.111704] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:30:20.098 crypt0
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@652 -- # waitforbdev crypt0
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@895 -- # local bdev_name=crypt0
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@896 -- # local bdev_timeout=
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@897 -- # local i
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@898 -- # [[ -z '' ]]
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@898 -- # bdev_timeout=2000
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:20.098 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:30:20.098 [
00:30:20.098 {
00:30:20.098 "name": "crypt0",
00:30:20.098 "aliases": [
00:30:20.098 "7c991f2c-c90f-5c4f-af40-ef7911f0e2b3"
00:30:20.099 ],
00:30:20.099 "product_name": "crypto",
00:30:20.099 "block_size": 512,
00:30:20.099 "num_blocks": 2097152,
00:30:20.099 "uuid": "7c991f2c-c90f-5c4f-af40-ef7911f0e2b3",
00:30:20.099 "assigned_rate_limits": {
00:30:20.099 "rw_ios_per_sec": 0,
00:30:20.099 "rw_mbytes_per_sec": 0,
00:30:20.099 "r_mbytes_per_sec": 0,
00:30:20.099 "w_mbytes_per_sec": 0
00:30:20.099 },
00:30:20.099 "claimed": false,
00:30:20.099 "zoned": false,
00:30:20.099 "supported_io_types": {
00:30:20.099 "read": true,
00:30:20.099 "write": true,
00:30:20.099 "unmap": false,
00:30:20.099 "write_zeroes": true,
00:30:20.099 "flush": false,
00:30:20.099 "reset": true,
00:30:20.099 "compare": false,
00:30:20.099 "compare_and_write": false,
00:30:20.099 "abort": false,
00:30:20.099 "nvme_admin": false,
00:30:20.099 "nvme_io": false
00:30:20.099 },
00:30:20.099 "memory_domains": [
00:30:20.099 {
00:30:20.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:20.099 "dma_device_type": 2
00:30:20.099 }
00:30:20.099 ],
00:30:20.099 "driver_specific": {
00:30:20.099 "crypto": {
00:30:20.099 "base_bdev_name": "EE_base0",
00:30:20.099 "name": "crypt0",
00:30:20.099 "key_name": "test_dek_sw"
00:30:20.099 }
00:30:20.099 }
00:30:20.099 }
00:30:20.099 ]
00:30:20.099 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:20.099 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@903 -- # return 0
00:30:20.099 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@655 -- # rpcpid=69506
00:30:20.099 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # sleep 1
00:30:20.099 03:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:30:20.099 Running I/O for 5 seconds...
00:30:21.033 03:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem
00:30:21.033 03:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.033 03:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:30:21.033 03:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.033 03:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@660 -- # wait 69506
00:30:25.216
00:30:25.216 Latency(us)
00:30:25.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:25.216 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096)
00:30:25.216 crypt0 : 5.00 32701.41 127.74 0.00 0.00 973.95 448.61 1271.71
00:30:25.216 ===================================================================================================================
00:30:25.216 Total : 32701.41 127.74 0.00 0.00 973.95 448.61 1271.71
00:30:25.216 0
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@662 -- # rpc_cmd bdev_crypto_delete crypt0
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # killprocess 69391
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@946 -- # '[' -z 69391 ']'
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@950 -- # kill -0 69391
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@951 -- # uname
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69391
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69391'
00:30:25.216 killing process with pid 69391
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@965 -- # kill 69391
00:30:25.216 Received shutdown signal, test time was about 5.000000 seconds
00:30:25.216
00:30:25.216 Latency(us)
00:30:25.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:25.216 ===================================================================================================================
00:30:25.216 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:25.216 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@970 -- # wait 69391
00:30:25.475 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@665 -- # trap - SIGINT SIGTERM EXIT
00:30:25.475
00:30:25.475 real 0m6.473s
00:30:25.475 user 0m6.809s
00:30:25.475 sys 0m0.312s
00:30:25.475 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1122 -- # xtrace_disable
00:30:25.475 03:24:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:30:25.475 ************************************
00:30:25.475 END TEST bdev_crypto_enomem
00:30:25.475 ************************************
00:30:25.475 03:24:56 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT
00:30:25.475 03:24:56 blockdev_crypto_sw -- bdev/blockdev.sh@811 -- # cleanup
00:30:25.475 03:24:56 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile
00:30:25.475 03:24:56 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:30:25.475 03:24:56 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]]
00:30:25.475 03:24:56 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]]
00:30:25.475 03:24:56 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]]
00:30:25.475 03:24:56 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]]
00:30:25.475
00:30:25.475 real 0m54.259s
00:30:25.475 user 1m45.083s
00:30:25.475 sys 0m5.121s
00:30:25.475 03:24:56 blockdev_crypto_sw -- common/autotest_common.sh@1122 -- # xtrace_disable
00:30:25.475 03:24:56 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:30:25.475 ************************************
00:30:25.475 END TEST blockdev_crypto_sw
00:30:25.475 ************************************
00:30:25.475 03:24:56 -- spdk/autotest.sh@355 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat
00:30:25.475 03:24:56 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:30:25.475 03:24:56 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:30:25.475 03:24:56 -- common/autotest_common.sh@10 -- # set +x
00:30:25.733
************************************ 00:30:25.733 START TEST blockdev_crypto_qat 00:30:25.734 ************************************ 00:30:25.734 03:24:56 blockdev_crypto_qat -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:30:25.734 * Looking for test storage... 00:30:25.734 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # uname -s 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # test_type=crypto_qat 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # crypto_device= 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # dek= 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # env_ctx= 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == bdev ]] 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == crypto_* ]] 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70403 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 70403 00:30:25.734 03:24:56 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:30:25.734 03:24:56 blockdev_crypto_qat -- common/autotest_common.sh@827 -- # '[' -z 70403 ']' 00:30:25.734 03:24:56 blockdev_crypto_qat -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:30:25.734 03:24:56 blockdev_crypto_qat -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:25.734 03:24:56 blockdev_crypto_qat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.734 03:24:56 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:25.734 03:24:56 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:30:25.734 [2024-05-15 03:24:56.838034] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:30:25.734 [2024-05-15 03:24:56.838092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70403 ] 00:30:25.992 [2024-05-15 03:24:56.936731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.992 [2024-05-15 03:24:57.028253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.557 03:24:57 blockdev_crypto_qat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:26.557 03:24:57 blockdev_crypto_qat -- common/autotest_common.sh@860 -- # return 0 00:30:26.557 03:24:57 blockdev_crypto_qat -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:30:26.557 03:24:57 blockdev_crypto_qat -- bdev/blockdev.sh@708 -- # setup_crypto_qat_conf 00:30:26.557 03:24:57 blockdev_crypto_qat -- bdev/blockdev.sh@170 -- # rpc_cmd 00:30:26.557 03:24:57 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.557 03:24:57 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:30:26.557 [2024-05-15 03:24:57.706371] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:30:26.557 [2024-05-15 03:24:57.714409] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:26.815 [2024-05-15 03:24:57.722426] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:26.815 [2024-05-15 03:24:57.790293] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:30:29.346 true 00:30:29.346 true 00:30:29.346 true 00:30:29.346 true 00:30:29.346 Malloc0 00:30:29.346 Malloc1 00:30:29.346 Malloc2 00:30:29.346 Malloc3 00:30:29.346 [2024-05-15 03:25:00.230391] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:30:29.346 crypto_ram 00:30:29.346 [2024-05-15 03:25:00.238411] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:30:29.346 crypto_ram1 00:30:29.346 [2024-05-15 03:25:00.246435] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:30:29.346 crypto_ram2 00:30:29.346 [2024-05-15 03:25:00.254455] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:30:29.346 crypto_ram3 00:30:29.346 [ 00:30:29.346 { 00:30:29.346 "name": "Malloc1", 00:30:29.346 "aliases": [ 00:30:29.346 "a01cec2c-6ec3-4b2e-b446-8392b0dcc514" 00:30:29.346 ], 00:30:29.346 "product_name": "Malloc disk", 00:30:29.346 "block_size": 512, 00:30:29.346 "num_blocks": 65536, 00:30:29.346 "uuid": "a01cec2c-6ec3-4b2e-b446-8392b0dcc514", 00:30:29.346 "assigned_rate_limits": { 00:30:29.346 "rw_ios_per_sec": 
0, 00:30:29.346 "rw_mbytes_per_sec": 0, 00:30:29.346 "r_mbytes_per_sec": 0, 00:30:29.346 "w_mbytes_per_sec": 0 00:30:29.346 }, 00:30:29.346 "claimed": true, 00:30:29.346 "claim_type": "exclusive_write", 00:30:29.346 "zoned": false, 00:30:29.346 "supported_io_types": { 00:30:29.346 "read": true, 00:30:29.346 "write": true, 00:30:29.346 "unmap": true, 00:30:29.346 "write_zeroes": true, 00:30:29.346 "flush": true, 00:30:29.346 "reset": true, 00:30:29.346 "compare": false, 00:30:29.346 "compare_and_write": false, 00:30:29.346 "abort": true, 00:30:29.346 "nvme_admin": false, 00:30:29.346 "nvme_io": false 00:30:29.346 }, 00:30:29.346 "memory_domains": [ 00:30:29.346 { 00:30:29.346 "dma_device_id": "system", 00:30:29.346 "dma_device_type": 1 00:30:29.346 }, 00:30:29.346 { 00:30:29.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:29.347 "dma_device_type": 2 00:30:29.347 } 00:30:29.347 ], 00:30:29.347 "driver_specific": {} 00:30:29.347 } 00:30:29.347 ] 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # cat 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # jq -r .name 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' 
"1dca22d4-1c92-5185-b0fb-5e961647b4e1"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1dca22d4-1c92-5185-b0fb-5e961647b4e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "82b64aba-060e-5c98-b46c-372151aa5366"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "82b64aba-060e-5c98-b46c-372151aa5366",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "e60e23ee-68a8-5cb4-a992-fd01abb99afd"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e60e23ee-68a8-5cb4-a992-fd01abb99afd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "5d013d21-0d41-5159-832f-8b8dd028e67b"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "5d013d21-0d41-5159-832f-8b8dd028e67b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:30:29.347 03:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@754 -- # killprocess 70403 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@946 -- # '[' -z 70403 ']' 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@950 -- # kill -0 70403 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@951 -- # uname 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70403 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70403' 00:30:29.347 killing process with pid 70403 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@965 -- # kill 70403 00:30:29.347 03:25:00 blockdev_crypto_qat -- common/autotest_common.sh@970 -- # wait 70403 00:30:29.914 03:25:01 blockdev_crypto_qat -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:29.914 03:25:01 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:30:29.914 03:25:01 blockdev_crypto_qat -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:30:29.914 03:25:01 blockdev_crypto_qat -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:29.914 03:25:01 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:30:29.914 ************************************ 00:30:29.914 START TEST bdev_hello_world 00:30:29.914 ************************************ 00:30:29.914 03:25:01 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:30:30.205 [2024-05-15 03:25:01.092727] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:30:30.205 [2024-05-15 03:25:01.092763] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71081 ] 00:30:30.205 [2024-05-15 03:25:01.178661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.205 [2024-05-15 03:25:01.271731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.205 [2024-05-15 03:25:01.293038] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:30:30.205 [2024-05-15 03:25:01.301064] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:30.205 [2024-05-15 03:25:01.309081] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:30.462 [2024-05-15 03:25:01.422500] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:30:32.992 [2024-05-15 03:25:03.712112] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:30:32.992 [2024-05-15 03:25:03.712174] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:32.992 [2024-05-15 03:25:03.712187] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:32.992 [2024-05-15 03:25:03.720131] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:30:32.992 [2024-05-15 03:25:03.720148] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:32.992 [2024-05-15 03:25:03.720157] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:32.992 [2024-05-15 03:25:03.728149] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:30:32.992 [2024-05-15 03:25:03.728164] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:30:32.992 [2024-05-15 03:25:03.728172] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:32.992 [2024-05-15 03:25:03.736171] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:30:32.992 [2024-05-15 03:25:03.736186] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:30:32.992 [2024-05-15 03:25:03.736194] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:32.992 [2024-05-15 03:25:03.808338] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:32.992 [2024-05-15 03:25:03.808381] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:30:32.992 [2024-05-15 03:25:03.808397] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:32.992 [2024-05-15 03:25:03.809730] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:32.992 [2024-05-15 03:25:03.809807] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:32.992 [2024-05-15 03:25:03.809823] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:32.992 [2024-05-15 03:25:03.809876] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
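The hello_world pass above boils down to one invocation of the example binary against the generated config; -b selects which bdev to open, and the test simply checks that the "Hello World!" string written through crypto_ram reads back intact (command taken from the xtrace above; workspace paths abbreviated):

    ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b crypto_ram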
00:30:32.992 00:30:32.992 [2024-05-15 03:25:03.809892] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:33.251 00:30:33.251 real 0m3.132s 00:30:33.251 user 0m2.712s 00:30:33.251 sys 0m0.370s 00:30:33.251 03:25:04 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:33.251 03:25:04 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:30:33.251 ************************************ 00:30:33.251 END TEST bdev_hello_world 00:30:33.251 ************************************ 00:30:33.251 03:25:04 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:30:33.251 03:25:04 blockdev_crypto_qat -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:33.252 03:25:04 blockdev_crypto_qat -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:33.252 03:25:04 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:30:33.252 ************************************ 00:30:33.252 START TEST bdev_bounds 00:30:33.252 ************************************ 00:30:33.252 03:25:04 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:30:33.252 03:25:04 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=71721 00:30:33.252 03:25:04 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:33.252 03:25:04 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:30:33.252 03:25:04 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 71721' 00:30:33.252 Process bdevio pid: 71721 00:30:33.252 03:25:04 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 71721 00:30:33.252 03:25:04 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 71721 ']' 00:30:33.252 03:25:04 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.252 03:25:04 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:33.252 03:25:04 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.252 03:25:04 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:33.252 03:25:04 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:30:33.252 [2024-05-15 03:25:04.305508] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:30:33.252 [2024-05-15 03:25:04.305563] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71721 ] 00:30:33.252 [2024-05-15 03:25:04.405258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:33.510 [2024-05-15 03:25:04.499315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.510 [2024-05-15 03:25:04.499410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:33.510 [2024-05-15 03:25:04.499414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.510 [2024-05-15 03:25:04.521037] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:30:33.510 [2024-05-15 03:25:04.529064] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:33.510 [2024-05-15 03:25:04.537084] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:33.510 [2024-05-15 03:25:04.641698] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:30:36.042 [2024-05-15 03:25:06.942812] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:30:36.043 [2024-05-15 03:25:06.942901] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:36.043 [2024-05-15 03:25:06.942914] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:36.043 [2024-05-15 03:25:06.950832] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:30:36.043 [2024-05-15 03:25:06.950857] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:36.043 [2024-05-15 03:25:06.950867] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:36.043 [2024-05-15 03:25:06.958861] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:30:36.043 [2024-05-15 03:25:06.958877] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:30:36.043 [2024-05-15 03:25:06.958885] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:36.043 [2024-05-15 03:25:06.966880] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:30:36.043 [2024-05-15 03:25:06.966895] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:30:36.043 [2024-05-15 03:25:06.966904] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:36.043 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:36.043 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:30:36.043 03:25:07 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:36.043 I/O targets: 00:30:36.043 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:30:36.043 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:30:36.043 crypto_ram2: 8192 blocks of 4096 bytes (32 MiB) 00:30:36.043 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:30:36.043 00:30:36.043 00:30:36.043 
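The CUnit suites that follow come from bdevio: the harness starts it in wait mode and then triggers the run over RPC, so all four bdevs listed under "I/O targets" are exercised in one process. Both commands appear in the xtrace above (paths abbreviated; -w is the wait-for-trigger flag, -s 0 is passed through from PRE_RESERVED_MEM):

    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &   # wait until told to run
    ./test/bdev/bdevio/tests.py perform_tests                          # fires the suites below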
CUnit - A unit testing framework for C - Version 2.1-3 00:30:36.043 http://cunit.sourceforge.net/ 00:30:36.043 00:30:36.043 00:30:36.043 Suite: bdevio tests on: crypto_ram3 00:30:36.043 Test: blockdev write read block ...passed 00:30:36.043 Test: blockdev write zeroes read block ...passed 00:30:36.043 Test: blockdev write zeroes read no split ...passed 00:30:36.043 Test: blockdev write zeroes read split ...passed 00:30:36.301 Test: blockdev write zeroes read split partial ...passed 00:30:36.301 Test: blockdev reset ...passed 00:30:36.301 Test: blockdev write read 8 blocks ...passed 00:30:36.301 Test: blockdev write read size > 128k ...passed 00:30:36.301 Test: blockdev write read invalid size ...passed 00:30:36.301 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:36.301 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:36.301 Test: blockdev write read max offset ...passed 00:30:36.301 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:36.301 Test: blockdev writev readv 8 blocks ...passed 00:30:36.301 Test: blockdev writev readv 30 x 1block ...passed 00:30:36.301 Test: blockdev writev readv block ...passed 00:30:36.301 Test: blockdev writev readv size > 128k ...passed 00:30:36.301 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:36.301 Test: blockdev comparev and writev ...passed 00:30:36.301 Test: blockdev nvme passthru rw ...passed 00:30:36.301 Test: blockdev nvme passthru vendor specific ...passed 00:30:36.301 Test: blockdev nvme admin passthru ...passed 00:30:36.301 Test: blockdev copy ...passed 00:30:36.301 Suite: bdevio tests on: crypto_ram2 00:30:36.301 Test: blockdev write read block ...passed 00:30:36.301 Test: blockdev write zeroes read block ...passed 00:30:36.301 Test: blockdev write zeroes read no split ...passed 00:30:36.301 Test: blockdev write zeroes read split ...passed 00:30:36.301 Test: blockdev write zeroes read split partial ...passed 00:30:36.301 Test: blockdev reset ...passed 00:30:36.301 Test: blockdev write read 8 blocks ...passed 00:30:36.301 Test: blockdev write read size > 128k ...passed 00:30:36.301 Test: blockdev write read invalid size ...passed 00:30:36.302 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:36.302 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:36.302 Test: blockdev write read max offset ...passed 00:30:36.302 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:36.302 Test: blockdev writev readv 8 blocks ...passed 00:30:36.302 Test: blockdev writev readv 30 x 1block ...passed 00:30:36.302 Test: blockdev writev readv block ...passed 00:30:36.302 Test: blockdev writev readv size > 128k ...passed 00:30:36.302 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:36.302 Test: blockdev comparev and writev ...passed 00:30:36.302 Test: blockdev nvme passthru rw ...passed 00:30:36.302 Test: blockdev nvme passthru vendor specific ...passed 00:30:36.302 Test: blockdev nvme admin passthru ...passed 00:30:36.302 Test: blockdev copy ...passed 00:30:36.302 Suite: bdevio tests on: crypto_ram1 00:30:36.302 Test: blockdev write read block ...passed 00:30:36.302 Test: blockdev write zeroes read block ...passed 00:30:36.302 Test: blockdev write zeroes read no split ...passed 00:30:36.302 Test: blockdev write zeroes read split ...passed 00:30:36.302 Test: blockdev write zeroes read split partial ...passed 00:30:36.302 Test: blockdev reset ...passed 00:30:36.302 
Test: blockdev write read 8 blocks ...passed 00:30:36.302 Test: blockdev write read size > 128k ...passed 00:30:36.302 Test: blockdev write read invalid size ...passed 00:30:36.302 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:36.302 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:36.302 Test: blockdev write read max offset ...passed 00:30:36.302 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:36.302 Test: blockdev writev readv 8 blocks ...passed 00:30:36.302 Test: blockdev writev readv 30 x 1block ...passed 00:30:36.302 Test: blockdev writev readv block ...passed 00:30:36.302 Test: blockdev writev readv size > 128k ...passed 00:30:36.302 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:36.302 Test: blockdev comparev and writev ...passed 00:30:36.302 Test: blockdev nvme passthru rw ...passed 00:30:36.302 Test: blockdev nvme passthru vendor specific ...passed 00:30:36.302 Test: blockdev nvme admin passthru ...passed 00:30:36.302 Test: blockdev copy ...passed 00:30:36.302 Suite: bdevio tests on: crypto_ram 00:30:36.302 Test: blockdev write read block ...passed 00:30:36.302 Test: blockdev write zeroes read block ...passed 00:30:36.302 Test: blockdev write zeroes read no split ...passed 00:30:36.302 Test: blockdev write zeroes read split ...passed 00:30:36.302 Test: blockdev write zeroes read split partial ...passed 00:30:36.302 Test: blockdev reset ...passed 00:30:36.302 Test: blockdev write read 8 blocks ...passed 00:30:36.302 Test: blockdev write read size > 128k ...passed 00:30:36.302 Test: blockdev write read invalid size ...passed 00:30:36.302 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:36.302 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:36.302 Test: blockdev write read max offset ...passed 00:30:36.302 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:36.302 Test: blockdev writev readv 8 blocks ...passed 00:30:36.302 Test: blockdev writev readv 30 x 1block ...passed 00:30:36.302 Test: blockdev writev readv block ...passed 00:30:36.302 Test: blockdev writev readv size > 128k ...passed 00:30:36.302 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:36.302 Test: blockdev comparev and writev ...passed 00:30:36.302 Test: blockdev nvme passthru rw ...passed 00:30:36.302 Test: blockdev nvme passthru vendor specific ...passed 00:30:36.302 Test: blockdev nvme admin passthru ...passed 00:30:36.302 Test: blockdev copy ...passed 00:30:36.302 00:30:36.302 Run Summary: Type Total Ran Passed Failed Inactive 00:30:36.302 suites 4 4 n/a 0 0 00:30:36.302 tests 92 92 92 0 0 00:30:36.302 asserts 520 520 520 0 n/a 00:30:36.302 00:30:36.302 Elapsed time = 0.518 seconds 00:30:36.302 0 00:30:36.302 03:25:07 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 71721 00:30:36.561 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 71721 ']' 00:30:36.561 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 71721 00:30:36.561 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:30:36.561 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:36.561 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71721 00:30:36.561 03:25:07 blockdev_crypto_qat.bdev_bounds -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:36.561 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:36.561 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71721' 00:30:36.561 killing process with pid 71721 00:30:36.561 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@965 -- # kill 71721 00:30:36.561 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@970 -- # wait 71721 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:30:36.821 00:30:36.821 real 0m3.629s 00:30:36.821 user 0m10.119s 00:30:36.821 sys 0m0.559s 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:30:36.821 ************************************ 00:30:36.821 END TEST bdev_bounds 00:30:36.821 ************************************ 00:30:36.821 03:25:07 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:30:36.821 03:25:07 blockdev_crypto_qat -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:30:36.821 03:25:07 blockdev_crypto_qat -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:36.821 03:25:07 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:30:36.821 ************************************ 00:30:36.821 START TEST bdev_nbd 00:30:36.821 ************************************ 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- 
bdev/blockdev.sh@314 -- # local nbd_list 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=72257 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 72257 /var/tmp/spdk-nbd.sock 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 72257 ']' 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:36.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:36.821 03:25:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:30:37.081 [2024-05-15 03:25:08.011412] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
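The bdev_svc just launched will not accept RPCs until its UNIX-domain socket is up, which is what the waitforlisten 72257 /var/tmp/spdk-nbd.sock step above waits for. A minimal bash sketch of that gate, using placeholder names (svc_pid, rpc_sock) rather than the literal autotest_common.sh implementation:

  svc_pid=$!                          # pid of the bdev_svc started above
  rpc_sock=/var/tmp/spdk-nbd.sock
  for ((i = 0; i < 100; i++)); do     # max_retries=100, as in the trace
      # rpc_get_methods is a cheap RPC that only succeeds once the app listens
      if scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
          break
      fi
      kill -0 "$svc_pid" || exit 1    # give up if bdev_svc died on startup
      sleep 0.1
  done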
00:30:37.081 [2024-05-15 03:25:08.011464] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.081 [2024-05-15 03:25:08.100162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.081 [2024-05-15 03:25:08.193341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.081 [2024-05-15 03:25:08.214653] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:30:37.081 [2024-05-15 03:25:08.222675] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:37.081 [2024-05-15 03:25:08.230694] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:37.339 [2024-05-15 03:25:08.334012] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:30:39.872 [2024-05-15 03:25:10.612492] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:30:39.872 [2024-05-15 03:25:10.612551] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:39.872 [2024-05-15 03:25:10.612562] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:39.872 [2024-05-15 03:25:10.620512] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:30:39.872 [2024-05-15 03:25:10.620529] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:39.872 [2024-05-15 03:25:10.620538] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:39.872 [2024-05-15 03:25:10.628530] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:30:39.872 [2024-05-15 03:25:10.628549] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:30:39.872 [2024-05-15 03:25:10.628557] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:39.872 [2024-05-15 03:25:10.636552] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:30:39.872 [2024-05-15 03:25:10.636567] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:30:39.872 [2024-05-15 03:25:10.636575] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 
crypto_ram3' 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:30:39.872 03:25:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:39.872 1+0 records in 00:30:39.872 1+0 records out 00:30:39.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258995 s, 15.8 MB/s 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:30:39.872 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:30:40.131 03:25:11 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:40.131 1+0 records in 00:30:40.131 1+0 records out 00:30:40.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266256 s, 15.4 MB/s 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:30:40.131 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:40.389 1+0 records in 00:30:40.389 1+0 records out 00:30:40.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167673 s, 24.4 MB/s 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:30:40.389 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:40.390 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:30:40.390 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:30:40.390 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:40.390 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:30:40.390 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:40.648 1+0 records in 00:30:40.648 1+0 records out 00:30:40.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277697 s, 14.7 MB/s 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:30:40.648 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:40.907 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:30:40.907 03:25:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:30:40.907 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:40.907 03:25:11 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:30:40.907 03:25:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:41.165 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:41.165 { 00:30:41.165 "nbd_device": "/dev/nbd0", 00:30:41.165 "bdev_name": "crypto_ram" 00:30:41.165 }, 00:30:41.165 { 00:30:41.165 "nbd_device": "/dev/nbd1", 00:30:41.165 "bdev_name": "crypto_ram1" 00:30:41.165 }, 00:30:41.165 { 00:30:41.165 "nbd_device": "/dev/nbd2", 00:30:41.165 "bdev_name": "crypto_ram2" 00:30:41.165 }, 00:30:41.165 { 00:30:41.165 "nbd_device": "/dev/nbd3", 00:30:41.165 "bdev_name": "crypto_ram3" 00:30:41.165 } 00:30:41.165 ]' 00:30:41.165 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:41.165 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:41.165 { 00:30:41.165 "nbd_device": "/dev/nbd0", 00:30:41.165 "bdev_name": "crypto_ram" 00:30:41.165 }, 00:30:41.165 { 00:30:41.165 "nbd_device": "/dev/nbd1", 00:30:41.165 "bdev_name": "crypto_ram1" 00:30:41.165 }, 00:30:41.165 { 00:30:41.165 "nbd_device": "/dev/nbd2", 00:30:41.165 "bdev_name": "crypto_ram2" 00:30:41.165 }, 00:30:41.165 { 00:30:41.165 "nbd_device": "/dev/nbd3", 00:30:41.165 "bdev_name": "crypto_ram3" 00:30:41.165 } 00:30:41.165 ]' 00:30:41.165 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:41.165 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:30:41.165 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:41.165 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:30:41.165 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:41.165 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:41.165 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:41.165 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:41.424 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:41.424 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:41.424 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:41.424 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:41.424 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:41.424 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:41.424 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:41.424 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:41.424 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:41.424 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:41.683 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:41.683 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:41.683 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:41.683 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:41.683 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:41.683 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:41.683 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:41.683 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:41.683 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:41.683 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:30:41.941 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:30:41.941 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:30:41.941 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:30:41.941 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:41.941 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:41.941 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:30:41.941 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:41.941 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:41.941 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:41.941 03:25:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:30:42.199 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:30:42.199 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:30:42.199 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:30:42.199 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:42.199 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:42.199 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:30:42.199 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:42.199 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:42.199 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:42.199 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:42.199 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:42.455 03:25:13 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:30:42.455 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:30:42.712 /dev/nbd0 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@865 -- # local i 
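The trace at this point is inside waitfornbd for nbd0; the same start-then-verify pattern repeats for all four bdevs. Condensed into a rough bash sketch (loop shape and retry interval are paraphrased from the nbd_common.sh trace, and /tmp/nbdtest stands in for the test's scratch file):

  for bdev in crypto_ram crypto_ram1 crypto_ram2 crypto_ram3; do
      # with no explicit device argument, nbd_start_disk picks a free nbd
      # node and prints it; that is how nbd_device=/dev/nbd0 etc. appear
      dev=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev")
      for ((i = 1; i <= 20; i++)); do           # waitfornbd: poll up to 20x
          grep -q -w "$(basename "$dev")" /proc/partitions && break
          sleep 0.1                             # assumed interval
      done
      # one direct-I/O 4k read proves the kernel <-> SPDK nbd path works
      dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  done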
00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:42.712 1+0 records in 00:30:42.712 1+0 records out 00:30:42.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261664 s, 15.7 MB/s 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:30:42.712 03:25:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:30:42.970 /dev/nbd1 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:42.970 1+0 records in 00:30:42.970 1+0 records out 00:30:42.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284366 s, 14.4 MB/s 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:42.970 
03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:30:42.970 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:30:43.230 /dev/nbd10 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:43.230 1+0 records in 00:30:43.230 1+0 records out 00:30:43.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249133 s, 16.4 MB/s 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:30:43.230 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:30:43.489 /dev/nbd11 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:43.489 1+0 records in 00:30:43.489 1+0 records out 00:30:43.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203892 s, 20.1 MB/s 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:30:43.489 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:43.747 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:30:43.748 03:25:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:30:43.748 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:43.748 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:30:43.748 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:43.748 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:43.748 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:44.006 { 00:30:44.006 "nbd_device": "/dev/nbd0", 00:30:44.006 "bdev_name": "crypto_ram" 00:30:44.006 }, 00:30:44.006 { 00:30:44.006 "nbd_device": "/dev/nbd1", 00:30:44.006 "bdev_name": "crypto_ram1" 00:30:44.006 }, 00:30:44.006 { 00:30:44.006 "nbd_device": "/dev/nbd10", 00:30:44.006 "bdev_name": "crypto_ram2" 00:30:44.006 }, 00:30:44.006 { 00:30:44.006 "nbd_device": "/dev/nbd11", 00:30:44.006 "bdev_name": "crypto_ram3" 00:30:44.006 } 00:30:44.006 ]' 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:44.006 { 00:30:44.006 "nbd_device": "/dev/nbd0", 00:30:44.006 "bdev_name": "crypto_ram" 00:30:44.006 }, 00:30:44.006 { 00:30:44.006 "nbd_device": "/dev/nbd1", 00:30:44.006 "bdev_name": "crypto_ram1" 00:30:44.006 }, 00:30:44.006 { 00:30:44.006 "nbd_device": "/dev/nbd10", 00:30:44.006 "bdev_name": "crypto_ram2" 00:30:44.006 }, 00:30:44.006 { 00:30:44.006 "nbd_device": "/dev/nbd11", 00:30:44.006 "bdev_name": "crypto_ram3" 00:30:44.006 } 
00:30:44.006 ]' 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:44.006 /dev/nbd1 00:30:44.006 /dev/nbd10 00:30:44.006 /dev/nbd11' 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:44.006 /dev/nbd1 00:30:44.006 /dev/nbd10 00:30:44.006 /dev/nbd11' 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:44.006 256+0 records in 00:30:44.006 256+0 records out 00:30:44.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00995171 s, 105 MB/s 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:44.006 03:25:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:44.006 256+0 records in 00:30:44.006 256+0 records out 00:30:44.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0341311 s, 30.7 MB/s 00:30:44.006 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:44.006 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:44.006 256+0 records in 00:30:44.006 256+0 records out 00:30:44.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297618 s, 35.2 MB/s 00:30:44.006 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:44.006 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:30:44.007 256+0 records in 00:30:44.007 256+0 records out 00:30:44.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024835 s, 42.2 MB/s 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 
oflag=direct 00:30:44.007 256+0 records in 00:30:44.007 256+0 records out 00:30:44.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233817 s, 44.8 MB/s 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:44.007 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:44.265 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:44.265 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:44.265 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:44.265 03:25:15 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:44.265 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:44.265 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:44.265 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:44.265 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:44.265 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:44.265 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:44.523 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:44.523 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:44.523 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:44.523 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:44.523 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:44.523 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:44.523 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:44.523 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:44.523 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:44.523 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:30:44.781 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:30:44.781 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:30:44.781 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:30:44.781 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:44.781 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:44.781 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:30:44.781 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:44.781 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:44.781 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:44.781 03:25:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:30:45.040 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:30:45.040 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:30:45.040 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:30:45.040 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:45.040 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:45.040 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:30:45.040 
03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:45.040 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:45.040 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:45.040 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:45.040 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:45.298 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:45.298 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:45.298 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:30:45.556 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:45.813 malloc_lvol_verify 00:30:45.813 03:25:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:46.071 fd6c1981-df20-42fd-9d7c-46f72171a856 00:30:46.071 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:46.330 9ee8d3d3-32bf-4b22-8af0-554631225437 00:30:46.330 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:46.600 /dev/nbd0 00:30:46.600 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:30:46.600 mke2fs 1.46.5 (30-Dec-2021) 00:30:46.600 Discarding device blocks: 0/4096 done 00:30:46.600 
Creating filesystem with 4096 1k blocks and 1024 inodes
00:30:46.600 
00:30:46.600 Allocating group tables: 0/1 done
00:30:46.600 Writing inode tables: 0/1 done
00:30:46.600 Creating journal (1024 blocks): done
00:30:46.600 Writing superblocks and filesystem accounting information: 0/1 done
00:30:46.600 
00:30:46.600 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:30:46.600 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:30:46.600 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:46.600 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:30:46.600 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:30:46.600 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:30:46.600 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:30:46.600 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 72257
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 72257 ']'
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 72257
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@951 -- # uname
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72257
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72257'
00:30:46.892 killing process with pid 72257
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@965 -- # kill 72257
00:30:46.892 03:25:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@970 -- # wait 72257
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT
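That completes nbd_with_lvol_verify. The whole round trip it just performed, collapsed into a short sketch (commands and sizes are the ones visible in the trace; error handling and the waitfornbd polling are omitted):

  rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # small malloc bdev, 512B blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
  $rpc bdev_lvol_create lvol 4 -l lvs                    # carve out lvol 'lvol'
  $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose it to the kernel
  mkfs.ext4 /dev/nbd0                                    # mkfs_ret=0 means the device is usable
  $rpc nbd_stop_disk /dev/nbd0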
00:30:47.155 
00:30:47.155 real 0m10.250s
00:30:47.155 user 0m14.396s
00:30:47.155 sys 0m3.266s
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:30:47.155 ************************************
00:30:47.155 END TEST bdev_nbd
00:30:47.155 ************************************
00:30:47.155 03:25:18 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # [[ y == y ]]
00:30:47.155 03:25:18 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = nvme ']'
00:30:47.155 03:25:18 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = gpt ']'
00:30:47.155 03:25:18 blockdev_crypto_qat -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite ''
00:30:47.155 03:25:18 blockdev_crypto_qat -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:30:47.155 03:25:18 blockdev_crypto_qat -- common/autotest_common.sh@1103 -- # xtrace_disable
00:30:47.155 03:25:18 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:30:47.155 ************************************
00:30:47.155 START TEST bdev_fio
00:30:47.155 ************************************
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite ''
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev
00:30:47.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # echo ''
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=//
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # env_context=
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO ''
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context=
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']'
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']'
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']'
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:30:47.155 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1297 -- # cat
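fio_config_gen is about to cat a global verify section into bdev.fio, and the steps that follow append one [job_crypto_ram*] stanza per bdev before launching fio through the spdk_bdev ioengine plugin. Reassembled as a single command from the fio_params in the trace (paths shortened here purely for readability):

  # per-bdev stanzas appended below look like:
  #   [job_crypto_ram]
  #   filename=crypto_ram
  LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      test/bdev/bdev.fio --verify_state_save=0 \
      --spdk_json_conf=test/bdev/bdev.json \
      --spdk_mem=0 --aux-path=../output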
00:30:47.156 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:30:47.156 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:30:47.156 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:30:47.156 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram1]' 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram1 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:47.415 ************************************ 00:30:47.415 START TEST bdev_fio_rw_verify 00:30:47.415 ************************************ 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:47.415 03:25:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:47.673 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:47.673 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:47.673 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:47.673 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:47.673 fio-3.35 00:30:47.673 Starting 4 threads 00:31:02.627 00:31:02.627 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=74784: Wed May 15 03:25:31 2024 00:31:02.627 read: IOPS=19.8k, BW=77.3MiB/s (81.0MB/s)(773MiB/10001msec) 00:31:02.627 slat (usec): min=19, max=455, avg=69.40, stdev=31.13 00:31:02.627 clat (usec): min=17, max=1321, avg=376.24, stdev=201.58 00:31:02.627 lat (usec): min=72, max=1457, avg=445.65, stdev=210.45 00:31:02.627 clat percentiles (usec): 00:31:02.627 | 50.000th=[ 338], 99.000th=[ 889], 99.900th=[ 1037], 99.990th=[ 1139], 00:31:02.627 | 99.999th=[ 1254] 00:31:02.627 write: IOPS=21.8k, BW=85.3MiB/s (89.4MB/s)(830MiB/9736msec); 0 zone resets 00:31:02.627 slat (usec): min=27, max=1182, avg=82.94, stdev=30.33 00:31:02.627 clat (usec): min=24, max=2305, avg=424.61, stdev=219.84 00:31:02.627 lat (usec): min=78, max=2382, avg=507.55, stdev=227.99 00:31:02.627 clat percentiles (usec): 00:31:02.627 | 50.000th=[ 396], 99.000th=[ 979], 99.900th=[ 1139], 99.990th=[ 1565], 00:31:02.627 | 99.999th=[ 1991] 00:31:02.627 bw ( KiB/s): min=70112, max=115352, per=97.78%, avg=85394.11, stdev=2681.77, samples=76 00:31:02.627 iops : min=17528, max=28838, avg=21348.53, stdev=670.44, samples=76 00:31:02.627 lat (usec) : 20=0.01%, 50=0.01%, 100=2.03%, 250=27.18%, 500=40.21% 00:31:02.627 lat (usec) : 750=23.40%, 1000=6.69% 00:31:02.627 lat (msec) : 2=0.49%, 4=0.01% 00:31:02.627 cpu : usr=99.62%, sys=0.00%, ctx=71, majf=0, minf=252 00:31:02.627 IO depths : 1=7.1%, 2=26.6%, 4=53.1%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:02.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.627 complete : 0=0.0%, 4=88.3%, 8=11.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.627 issued rwts: total=197842,212571,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.627 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:02.627 00:31:02.627 Run status group 0 (all jobs): 00:31:02.627 READ: bw=77.3MiB/s (81.0MB/s), 77.3MiB/s-77.3MiB/s (81.0MB/s-81.0MB/s), io=773MiB (810MB), run=10001-10001msec 00:31:02.627 WRITE: bw=85.3MiB/s (89.4MB/s), 85.3MiB/s-85.3MiB/s (89.4MB/s-89.4MB/s), io=830MiB (871MB), run=9736-9736msec 00:31:02.627 00:31:02.627 real 0m13.542s 00:31:02.627 user 0m50.618s 00:31:02.627 sys 0m0.519s 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:31:02.627 ************************************ 00:31:02.627 END TEST bdev_fio_rw_verify 00:31:02.627 ************************************ 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:02.627 03:25:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "1dca22d4-1c92-5185-b0fb-5e961647b4e1"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1dca22d4-1c92-5185-b0fb-5e961647b4e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "82b64aba-060e-5c98-b46c-372151aa5366"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "82b64aba-060e-5c98-b46c-372151aa5366",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "e60e23ee-68a8-5cb4-a992-fd01abb99afd"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e60e23ee-68a8-5cb4-a992-fd01abb99afd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "5d013d21-0d41-5159-832f-8b8dd028e67b"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "5d013d21-0d41-5159-832f-8b8dd028e67b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:31:02.627 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:31:02.627 crypto_ram1 00:31:02.627 crypto_ram2 00:31:02.627 crypto_ram3 ]] 00:31:02.627 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "1dca22d4-1c92-5185-b0fb-5e961647b4e1"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1dca22d4-1c92-5185-b0fb-5e961647b4e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "82b64aba-060e-5c98-b46c-372151aa5366"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "82b64aba-060e-5c98-b46c-372151aa5366",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "e60e23ee-68a8-5cb4-a992-fd01abb99afd"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e60e23ee-68a8-5cb4-a992-fd01abb99afd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "5d013d21-0d41-5159-832f-8b8dd028e67b"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "5d013d21-0d41-5159-832f-8b8dd028e67b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram1]' 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram1 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:02.628 ************************************ 00:31:02.628 START TEST bdev_fio_trim 00:31:02.628 ************************************ 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # local sanitizers 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # shift 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local asan_lib= 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libasan 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:02.628 03:25:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:02.628 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:02.628 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:02.628 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:02.628 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:02.628 fio-3.35 00:31:02.628 Starting 4 threads 00:31:14.830 00:31:14.830 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=77001: Wed May 15 03:25:45 2024 00:31:14.830 write: IOPS=34.9k, BW=136MiB/s (143MB/s)(1362MiB/10001msec); 0 zone resets 00:31:14.830 slat (usec): min=18, max=381, avg=65.92, stdev=27.36 00:31:14.830 clat (usec): min=35, max=1544, avg=240.29, stdev=114.41 00:31:14.830 lat (usec): min=54, max=1628, avg=306.21, stdev=124.17 00:31:14.830 clat percentiles (usec): 00:31:14.830 | 50.000th=[ 227], 99.000th=[ 523], 99.900th=[ 635], 99.990th=[ 742], 00:31:14.830 | 99.999th=[ 1270] 00:31:14.830 bw ( KiB/s): min=130720, max=195392, per=100.00%, avg=139787.79, stdev=5645.15, samples=76 00:31:14.830 iops : min=32680, max=48848, avg=34946.84, stdev=1411.29, samples=76 00:31:14.830 trim: IOPS=34.9k, BW=136MiB/s (143MB/s)(1362MiB/10001msec); 0 zone resets 00:31:14.830 slat (nsec): min=6188, max=66207, avg=19363.29, stdev=7682.54 00:31:14.830 clat (usec): min=54, max=1628, avg=306.45, stdev=124.17 00:31:14.830 lat (usec): min=61, max=1647, avg=325.82, stdev=126.05 00:31:14.830 clat percentiles (usec): 00:31:14.830 | 50.000th=[ 293], 99.000th=[ 619], 99.900th=[ 725], 99.990th=[ 898], 00:31:14.830 | 99.999th=[ 1352] 00:31:14.830 bw ( KiB/s): min=130744, max=195392, per=100.00%, avg=139787.37, stdev=5645.12, samples=76 00:31:14.830 iops : min=32688, max=48848, avg=34946.95, stdev=1411.27, samples=76 00:31:14.830 lat (usec) : 50=0.01%, 100=5.71%, 250=41.83%, 500=48.06%, 750=4.37% 00:31:14.830 lat (usec) : 1000=0.03% 00:31:14.830 lat (msec) : 2=0.01% 00:31:14.830 cpu : usr=99.62%, sys=0.00%, ctx=65, majf=0, minf=107 00:31:14.830 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.830 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.830 issued rwts: total=0,348614,348614,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.830 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:14.830 00:31:14.830 Run status group 0 (all jobs): 00:31:14.830 WRITE: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=1362MiB (1428MB), run=10001-10001msec 00:31:14.830 TRIM: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=1362MiB (1428MB), run=10001-10001msec 00:31:14.830 00:31:14.830 real 0m13.568s 00:31:14.830 user 0m50.700s 00:31:14.830 sys 0m0.516s 00:31:14.830 03:25:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:14.830 03:25:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:31:14.830 ************************************ 00:31:14.830 END TEST bdev_fio_trim 00:31:14.830 ************************************ 00:31:14.830 03:25:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:31:14.830 03:25:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:14.830 03:25:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:31:14.830 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:31:14.830 03:25:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:31:14.830 00:31:14.830 real 0m27.443s 00:31:14.830 user 
1m41.506s 00:31:14.830 sys 0m1.192s 00:31:14.830 03:25:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:14.830 03:25:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:14.830 ************************************ 00:31:14.830 END TEST bdev_fio 00:31:14.830 ************************************ 00:31:14.830 03:25:45 blockdev_crypto_qat -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:14.830 03:25:45 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:14.830 03:25:45 blockdev_crypto_qat -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:31:14.830 03:25:45 blockdev_crypto_qat -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:14.830 03:25:45 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:14.830 ************************************ 00:31:14.830 START TEST bdev_verify 00:31:14.830 ************************************ 00:31:14.830 03:25:45 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:14.830 [2024-05-15 03:25:45.842917] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:31:14.830 [2024-05-15 03:25:45.842969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78719 ] 00:31:14.830 [2024-05-15 03:25:45.941620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:15.087 [2024-05-15 03:25:46.037606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.087 [2024-05-15 03:25:46.037612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.087 [2024-05-15 03:25:46.058989] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:31:15.087 [2024-05-15 03:25:46.067024] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:15.087 [2024-05-15 03:25:46.075044] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:15.087 [2024-05-15 03:25:46.174366] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:31:17.615 [2024-05-15 03:25:48.451688] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:31:17.615 [2024-05-15 03:25:48.451755] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:17.615 [2024-05-15 03:25:48.451768] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:17.615 [2024-05-15 03:25:48.459704] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:31:17.615 [2024-05-15 03:25:48.459722] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:17.615 [2024-05-15 03:25:48.459731] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:17.615 [2024-05-15 
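Restating the bdevperf verify command above flag by flag; the glosses are paraphrased from bdevperf's usage text rather than taken from this log, so treat them as a sketch:
cd /var/jenkins/workspace/crypto-phy-autotest/spdk   # paths as traced above
args=(
    --json test/bdev/bdev.json   # replay the crypto bdev stack from the JSON config at startup
    -q 128                       # 128 outstanding I/Os per job
    -o 4096                      # 4 KiB I/O size
    -w verify                    # write, then read back and compare
    -t 5                         # matches "Running I/O for 5 seconds..." below
    -C                           # every core drives every bdev, hence the 0x1/0x2 core-mask job pairs in the table below
    -m 0x3                       # two reactors, cores 0 and 1
)
./build/examples/bdevperf "${args[@]}"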
03:25:48.467728] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:31:17.615 [2024-05-15 03:25:48.467750] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:17.615 [2024-05-15 03:25:48.467759] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:17.615 [2024-05-15 03:25:48.475749] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:31:17.615 [2024-05-15 03:25:48.475766] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:17.615 [2024-05-15 03:25:48.475774] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:17.615 Running I/O for 5 seconds... 00:31:22.883 00:31:22.883 Latency(us) 00:31:22.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.883 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:22.883 Verification LBA range: start 0x0 length 0x1000 00:31:22.883 crypto_ram : 5.07 443.12 1.73 0.00 0.00 287393.48 4525.10 176759.95 00:31:22.883 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:22.883 Verification LBA range: start 0x1000 length 0x1000 00:31:22.883 crypto_ram : 5.08 448.94 1.75 0.00 0.00 283915.76 4774.77 175761.31 00:31:22.883 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:22.883 Verification LBA range: start 0x0 length 0x1000 00:31:22.883 crypto_ram1 : 5.08 447.63 1.75 0.00 0.00 284165.61 3963.37 176759.95 00:31:22.883 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:22.883 Verification LBA range: start 0x1000 length 0x1000 00:31:22.883 crypto_ram1 : 5.08 453.45 1.77 0.00 0.00 280815.05 4649.94 176759.95 00:31:22.883 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:22.883 Verification LBA range: start 0x0 length 0x1000 00:31:22.883 crypto_ram2 : 5.06 3443.64 13.45 0.00 0.00 36838.28 5024.43 33204.91 00:31:22.883 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:22.883 Verification LBA range: start 0x1000 length 0x1000 00:31:22.883 crypto_ram2 : 5.06 3465.78 13.54 0.00 0.00 36598.22 5835.82 33454.57 00:31:22.883 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:22.883 Verification LBA range: start 0x0 length 0x1000 00:31:22.883 crypto_ram3 : 5.06 3442.49 13.45 0.00 0.00 36727.93 4306.65 33454.57 00:31:22.883 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:22.883 Verification LBA range: start 0x1000 length 0x1000 00:31:22.883 crypto_ram3 : 5.06 3464.63 13.53 0.00 0.00 36493.73 5523.75 33454.57 00:31:22.883 =================================================================================================================== 00:31:22.883 Total : 15609.68 60.98 0.00 0.00 65176.11 3963.37 176759.95 00:31:23.142 00:31:23.142 real 0m8.256s 00:31:23.142 user 0m15.680s 00:31:23.142 sys 0m0.384s 00:31:23.142 03:25:54 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:23.142 03:25:54 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:31:23.142 ************************************ 00:31:23.142 END TEST bdev_verify 00:31:23.142 ************************************ 00:31:23.142 03:25:54 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:23.142 03:25:54 blockdev_crypto_qat -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:31:23.142 03:25:54 blockdev_crypto_qat -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:23.142 03:25:54 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:23.142 ************************************ 00:31:23.142 START TEST bdev_verify_big_io 00:31:23.142 ************************************ 00:31:23.142 03:25:54 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:23.142 [2024-05-15 03:25:54.166296] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:31:23.142 [2024-05-15 03:25:54.166348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79977 ] 00:31:23.142 [2024-05-15 03:25:54.263447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:23.401 [2024-05-15 03:25:54.355449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.401 [2024-05-15 03:25:54.355455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.401 [2024-05-15 03:25:54.376935] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:31:23.401 [2024-05-15 03:25:54.384966] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:23.401 [2024-05-15 03:25:54.392989] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:23.401 [2024-05-15 03:25:54.496059] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:31:25.991 [2024-05-15 03:25:56.779493] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:31:25.991 [2024-05-15 03:25:56.779559] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:25.991 [2024-05-15 03:25:56.779572] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:25.991 [2024-05-15 03:25:56.787511] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:31:25.991 [2024-05-15 03:25:56.787527] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:25.991 [2024-05-15 03:25:56.787535] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:25.991 [2024-05-15 03:25:56.795535] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:31:25.991 [2024-05-15 03:25:56.795550] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:25.991 [2024-05-15 03:25:56.795558] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:25.991 [2024-05-15 03:25:56.803557] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:31:25.991 [2024-05-15 03:25:56.803572] 
bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:25.991 [2024-05-15 03:25:56.803580] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:25.991 Running I/O for 5 seconds... 00:31:26.931 [2024-05-15 03:25:57.778355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 last message repeated throughout the burst, with per-I/O timestamps from 03:25:57.778805 through 03:25:57.855552, as the queue-depth-128 verify jobs momentarily outran the cryptodev mbuf pool
00:31:26.933 [2024-05-15 03:25:57.856014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.856030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.859334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.859378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.859422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.859464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.859982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.860026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.860067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.860108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.860516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.860530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.863776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.863821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.863867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.863908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.864412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.864456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.864497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.864550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.864946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.864960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.868252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.868299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:26.933 [2024-05-15 03:25:57.868340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.868381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.868842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.868896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.868937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.868979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.869363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.869377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.872584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.872632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.872672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.872715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.873150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.873196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.873237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.873278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.873640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.873654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.877046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.877092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.877156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.877215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.877796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.877859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:26.933 [2024-05-15 03:25:57.877926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.877968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.878363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.878377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.881735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.881782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.933 [2024-05-15 03:25:57.881822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.881868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.882308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.882352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.882398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.882439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.882930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.882946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.886107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.886167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.886227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.886270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.886734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.886778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.886818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.886863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.887347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.887361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:26.934 [2024-05-15 03:25:57.890392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.890438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.890480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.890533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.891127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.891171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.891212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.891253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.891721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.891735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.894808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.894857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.894898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.894940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.895448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.895496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.895545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.895586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.895978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.895992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.899003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.899048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.899088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.899129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:26.934 [2024-05-15 03:25:57.899657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.899701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.899757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.899798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.900319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.900334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.903472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.903518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.903561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.903602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.904103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.904146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.904187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.904228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.904699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.904713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.907947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.907992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.908033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.908073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.934 [2024-05-15 03:25:57.908574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.908618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.908658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.908704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:26.935 [2024-05-15 03:25:57.909138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.909152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.911119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.911165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.911206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.911247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.911570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.911612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.911653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.911692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.912021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.912035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.914283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.914328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.914369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.914410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.914941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.914985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.915025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.915065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.915551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.915565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.917681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.917756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:26.935 [2024-05-15 03:25:57.917797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.917826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.918149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.918198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.918241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.918287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.918640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.918654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.920981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.921408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.921828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.922252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.924164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.925923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.927665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.929392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.929802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.929815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.932063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.932486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.932909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.933330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.935503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.937426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:26.935 [2024-05-15 03:25:57.939387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.941269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.941655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.941668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.943918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.944344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.944763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.945188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.947489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.949291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.951132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.953006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.953526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.935 [2024-05-15 03:25:57.953539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.955781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.956210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.956628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.957051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.959392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.961031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.962773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.964516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.965059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.965073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:26.936 [2024-05-15 03:25:57.967505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.967932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.968351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.968773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.970862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.972310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.974036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.975794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.976212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.976227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.978949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.979376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.979795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.980219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.982143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.983596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.985292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.987011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.987371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.987390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.990412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.990845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.991268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.991685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:26.936 [2024-05-15 03:25:57.993306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.994769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.996508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.998238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.998530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:57.998544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.001906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.002334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.002758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.003185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.004598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.006079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.007825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.009584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.009882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.009896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.013400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.013824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.014247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.014666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.015899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.017357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.019093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.020832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:26.936 [2024-05-15 03:25:58.021128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.936 [2024-05-15 03:25:58.021146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.024960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.025385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.025804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.026227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.027301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.028830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.030579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.032320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.032613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.032627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.036397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.036829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.037251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.037670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.038575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.040330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.042247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.044212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.044506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.044520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.048162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.048704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:26.937 [2024-05-15 03:25:58.049133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.049552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.050470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.052364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.054113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.055907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.056200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.056214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.059847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.060526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.060951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.061369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.062290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.064138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.065751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.067487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.067779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.067792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.071405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.072321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.072743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.073165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.074084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.937 [2024-05-15 03:25:58.075768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:26.938 [2024-05-15 03:25:58.077214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.938 [2024-05-15 03:25:58.078942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.938 [2024-05-15 03:25:58.079233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.938 [2024-05-15 03:25:58.079247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.938 [2024-05-15 03:25:58.082891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.938 [2024-05-15 03:25:58.083982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.938 [2024-05-15 03:25:58.084423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.938 [2024-05-15 03:25:58.084842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:26.938 [2024-05-15 03:25:58.085734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.199 [2024-05-15 03:25:58.087265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.199 [2024-05-15 03:25:58.088742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.199 [2024-05-15 03:25:58.090475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.199 [2024-05-15 03:25:58.090768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.199 [2024-05-15 03:25:58.090781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.199 [2024-05-15 03:25:58.094418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.199 [2024-05-15 03:25:58.095677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.200 [2024-05-15 03:25:58.096107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.200 [2024-05-15 03:25:58.096528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.200 [2024-05-15 03:25:58.097448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.200 [2024-05-15 03:25:58.098892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.200 [2024-05-15 03:25:58.100352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.200 [2024-05-15 03:25:58.102084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.200 [2024-05-15 03:25:58.102375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.200 [2024-05-15 03:25:58.102389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.200 [2024-05-15 03:25:58.106015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.200 [... same ERROR line repeated several hundred times, identical source location and text, timestamps 03:25:58.106 through 03:25:58.398; intermediate occurrences omitted for brevity ...] 
00:31:27.468 [2024-05-15 03:25:58.398014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.468 [2024-05-15 03:25:58.398057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.398096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.399764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.400058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.400115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.400157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.400200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.400240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.400526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.400540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.404067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.405320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.405744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.406168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.406684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.407116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.408977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.410604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.412351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.412640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.412654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.416178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.417527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.417953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.468 [2024-05-15 03:25:58.418373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.418911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.419337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.421109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.422635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.424372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.424662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.424675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.428209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.429622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.430049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.430468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.430985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.431413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.433103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.434562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.436308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.436598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.436612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.440128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.441659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.442083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.442503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.442964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.468 [2024-05-15 03:25:58.443391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.444959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.446435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.448188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.448478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.448492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.452009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.453609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.454034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.454456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.454927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.455353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.456835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.458302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.460019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.460309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.460322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.463884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.465624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.466059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.466479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.466924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.467349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.468753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.468 [2024-05-15 03:25:58.470212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.471955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.472243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.472256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.475878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.468 [2024-05-15 03:25:58.477654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.478079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.478499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.478931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.479360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.480656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.482118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.483861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.484149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.484163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.487799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.489742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.490168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.490590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.491043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.491473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.492627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.494070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.495822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.469 [2024-05-15 03:25:58.496119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.496133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.499678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.501527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.501968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.502390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.502835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.503267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.504322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.505795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.507517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.507808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.507822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.511333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.513069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.513521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.513947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.514389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.514816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.515683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.517134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.518874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.519164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.519178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.469 [2024-05-15 03:25:58.522680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.524418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.525094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.525514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.525985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.526413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.527080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.528574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.530328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.530620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.530637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.534138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.535884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.536703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.537131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.537612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.538062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.538608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.540222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.542062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.542355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.542370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.545889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.547626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.469 [2024-05-15 03:25:58.548548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.548974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.549438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.549871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.550288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.552086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.553958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.554250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.554264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.557667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.469 [2024-05-15 03:25:58.559354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.560559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.560990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.561446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.561878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.562303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.564261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.565993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.566285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.566299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.569764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.571501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.572731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.573160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.470 [2024-05-15 03:25:58.573647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.574081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.574501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.576416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.578101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.578392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.578406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.581860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.583615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.585011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.585434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.585912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.586337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.586759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.588460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.589931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.590223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.590237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.593702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.595436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.596942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.597368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.597841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.598272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.470 [2024-05-15 03:25:58.598692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.600254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.601718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.602015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.602029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.605506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.607244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.608944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.609370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.609833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.610266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.610686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.612083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.613544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.613836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.613854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.617526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.619352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.621152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.621577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.622030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.470 [2024-05-15 03:25:58.622461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.622886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.624235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.733 [2024-05-15 03:25:58.625702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.626001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.626016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.629684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.631492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.633261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.633683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.634162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.634590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.635014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.636345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.637818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.638117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.638131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.641882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.643753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.645582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.646009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.646482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.646914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.647334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.648595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.650058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.650350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.733 [2024-05-15 03:25:58.650364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.653930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.655785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.657658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.658094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.658563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.658994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.659415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.660463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.661935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.662230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.662245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.665544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.667290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.669027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.669539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.670053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.733 [2024-05-15 03:25:58.670483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.670908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.671745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.673203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.673494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.673507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.676710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.734 [2024-05-15 03:25:58.678457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.680193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.680904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.681431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.681863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.682287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.682884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.684440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.684731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.684745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.687915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.689664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.691398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.692339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.692870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.693300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.693730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.694198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.695900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.696194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.696207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.699387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.701085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.702816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.734 [2024-05-15 03:25:58.703966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.704473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.704910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.705332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.705749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.707659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.707960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.707974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.711130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.712872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.714615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.715881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.716313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.716742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.717168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.717588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.719403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.719756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.719770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.722929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.724685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.726417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.727838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:27.734 [2024-05-15 03:25:58.728326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:27.734 [2024-05-15 03:25:58.728758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.002 [2024-05-15 03:25:59.000195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
[identical "Failed to get src_mbufs!" error lines emitted continuously between the two timestamps above; several hundred duplicate entries collapsed]
00:31:28.002 [2024-05-15 03:25:59.001667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.001962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.003723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.005160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.005585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.006006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.006466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.006480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.010158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.011565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.013168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.014619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.014918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.016678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.018027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.018455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.018879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.019307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.019321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.022973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.024648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.025993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.027462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.027755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.002 [2024-05-15 03:25:59.029524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.031140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.031560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.031988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.032442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.032456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.036167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.037730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.039181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.040647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.040944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.042700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.044213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.044635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.045060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.045470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.045484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.049400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.051281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.052441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.053912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.054206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.056041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.057810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.002 [2024-05-15 03:25:59.058242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.058662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.059113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.059127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.062910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.064550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.065921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.067384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.067681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.069449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.071003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.071427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.071853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.072263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.072277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.076198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.078001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.079251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.002 [2024-05-15 03:25:59.080724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.081019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.082783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.084491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.084919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.085344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.003 [2024-05-15 03:25:59.085835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.085852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.089542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.091135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.092593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.094063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.094357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.096120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.097649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.098074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.098496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.098906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.098920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.102860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.104730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.105881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.107340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.107633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.109495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.111306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.111726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.112149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.112559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.112573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.003 [2024-05-15 03:25:59.116417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.118149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.119445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.120898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.121193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.122957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.124633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.125061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.125483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.125907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.125922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.129695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.131628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.132669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.134132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.134428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.136387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.138272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.138692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.139119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.139549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.139563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.143420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.145202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.003 [2024-05-15 03:25:59.146407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.147882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.148176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.149981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.151739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.152167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.152590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.152983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.152997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.156690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.003 [2024-05-15 03:25:59.158442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.159283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.160755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.161053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.162801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.164560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.164991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.165410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.165832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.165846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.169750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.171666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.172687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.174156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.266 [2024-05-15 03:25:59.174453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.176353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.178317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.178742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.179169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.179566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.179582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.183307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.185044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.185869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.187346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.187639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.189371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.191116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.191707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.192136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.192578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.192593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.196450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.198185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.199001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.200466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.200761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.202487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.266 [2024-05-15 03:25:59.204220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.204746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.205172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.205620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.205634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.209604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.211349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.212188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.213885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.214182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.215934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.217674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.218449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.218872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.219359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.219373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.223292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.225036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.225867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.227453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.227747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.229503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.231168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.231911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.266 [2024-05-15 03:25:59.232332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.232798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.232812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.236630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.237862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.239650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.241207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.241507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.242792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.243216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.243635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.266 [2024-05-15 03:25:59.244071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.244528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.244543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.247601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.248886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.250349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.252118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.252413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.254097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.254524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.254949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.255371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.255870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.267 [2024-05-15 03:25:59.255885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.258664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.259093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.259516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.259943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.260376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.260807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.261234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.261657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.262082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.262468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.262482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.265440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.265889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.265956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.266383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.266854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.267286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.267708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.268141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.268567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.269084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.269099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.271950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.267 [2024-05-15 03:25:59.272392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.272816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.272876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.273370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.273797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.274229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.274654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.275084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.275556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.275570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.278185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.278239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.278280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.278320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.278785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.278835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.278882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.278923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.278964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.279382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.279400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.281860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.281906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.281946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.267 [2024-05-15 03:25:59.281989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.282440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.282487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.282529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.282572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.282614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.283095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.283110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.285557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.285603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.285644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.285685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.286159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.286225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.286279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.286322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.286364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.286859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.286873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.289284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.289330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.289371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.289411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.289814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.267 [2024-05-15 03:25:59.289867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.289908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.289953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.289993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.290470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.290485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.292876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.292922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.292963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.293004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.293435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.267 [2024-05-15 03:25:59.293498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.268 [2024-05-15 03:25:59.293541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.268 [2024-05-15 03:25:59.293582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.268 [2024-05-15 03:25:59.293622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.268 [2024-05-15 03:25:59.294095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.268 [2024-05-15 03:25:59.294110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.268 [2024-05-15 03:25:59.296613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.268 [2024-05-15 03:25:59.296657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.268 [2024-05-15 03:25:59.296714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.268 [2024-05-15 03:25:59.296755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.268 [2024-05-15 03:25:59.297266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.268 [2024-05-15 03:25:59.297316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.268 [2024-05-15 03:25:59.297358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.268 [2024-05-15 03:25:59.297398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical "accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!" lines, repeated continuously from 03:25:59.297 through 03:25:59.614, omitted ...]
00:31:28.534 [2024-05-15 03:25:59.614336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:31:28.534 [2024-05-15 03:25:59.614768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.616370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.617826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.619555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.619853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.619867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.623428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.624892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.625320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.625744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.626159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.626592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.627958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.629414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.631155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.631447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.631460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.635025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.636726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.637155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.637577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.637979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.638411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.639520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.534 [2024-05-15 03:25:59.640972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.642714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.643009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.643023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.646810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.648685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.649110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.649533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.649923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.650355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.651302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.652758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.654500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.654791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.654805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.658384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.660143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.660571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.660997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.661421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.661857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.662521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.664022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.665763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.534 [2024-05-15 03:25:59.666058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.666072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.669614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.671344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.672089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.672513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.672986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.673437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.673864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.675656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.677479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.677771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.677784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.681313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.683057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.684038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.684473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.684940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.685372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.685794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.687605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.689159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.689453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.534 [2024-05-15 03:25:59.689467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.795 [2024-05-15 03:25:59.692953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.694703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.695908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.696333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.696799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.697234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.697657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.698811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.700548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.700839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.700856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.704464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.704904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.705327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.705752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.706217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.707289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.708759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.710532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.712279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.712570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.795 [2024-05-15 03:25:59.712583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.716123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.716689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.796 [2024-05-15 03:25:59.717122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.717544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.718047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.718479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.718906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.719334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.719756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.720196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.720210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.723120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.723548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.723975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.724398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.724880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.725316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.725738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.726163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.726585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.727025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.727039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.729919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.730350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.730407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.730829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.796 [2024-05-15 03:25:59.731296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.731726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.732154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.732576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.733000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.733424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.733437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.736460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.736902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.737327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.737375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.737838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.738270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.738691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.739116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.739545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.739986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.740000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.742652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.742710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.742750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.742791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.743170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.743230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.796 [2024-05-15 03:25:59.743273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.743313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.743354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.743831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.743845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.746466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.746513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.746578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.746619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.747026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.747086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.747132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.747186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.747227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.747717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.747731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.750223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.750288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.750343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.750386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.750837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.750901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.750942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.750982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.796 [2024-05-15 03:25:59.751022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.751485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.751500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.753960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.754008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.754048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.754089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.754514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.754571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.754616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.754659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.754700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.755163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.755179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.757637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.757682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.757739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.757792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.758287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.758340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.796 [2024-05-15 03:25:59.758381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.758423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.758463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.758915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.797 [2024-05-15 03:25:59.758929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.761524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.761571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.761610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.761650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.762115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.762164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.762207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.762249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.762292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.762688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.762702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.765206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.765252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.765292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.765336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.765810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.765862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.765905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.765945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.765985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.766428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.766441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.768775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.797 [2024-05-15 03:25:59.768819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.768866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.768910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.769396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.769455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.769497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.769538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.769582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.770045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.770060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.772389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.772463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.772504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.772547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.773035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.773117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.773162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.773202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.773241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.773526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.773539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.776035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.776082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.776130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.797 [2024-05-15 03:25:59.776171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.776612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.776672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.776719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.776789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.776845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.777275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.777289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.779587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.779632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.779671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.779711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.780007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.780067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.780108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.780148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.780189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.780658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.780672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.782873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.782919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.782962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.783003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.783479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.797 [2024-05-15 03:25:59.783528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.783570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.783612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.783653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.783989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.784007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.786367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.786414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.786454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.786499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.786809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.786869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.786910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.786950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.786990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.787378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.787393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.789790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.789837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.789882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.797 [2024-05-15 03:25:59.789923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.790288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.790348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.790389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.798 [2024-05-15 03:25:59.790431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.790471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.790896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.790910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.793076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.793122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.793163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.793203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.793629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.793686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.793728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.793770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.793815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.794286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.794301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.796505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.796551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.796595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.796636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.797092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.797149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.797215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.797258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:28.798 [2024-05-15 03:25:59.797299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:28.798 [2024-05-15 03:25:59.797588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:31:28.798 [... identical *ERROR* line repeated for each failed allocation attempt from 03:25:59.797601 through 03:26:00.316647; duplicate entries elided ...]
00:31:29.324 [2024-05-15 03:26:00.316688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:31:29.324 [2024-05-15 03:26:00.316729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.317249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.317294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.317340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.317384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.317820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.317834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.320368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.320414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.320454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.320494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.320984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.321030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.321071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.321111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.321520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.321534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.323978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.324023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.324063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.324608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.324653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.324696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:29.324 [2024-05-15 03:26:00.325064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
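The burst above is a transient allocation failure, not a data error: accel_dpdk_cryptodev_task_alloc_resources could not bulk-allocate DPDK source mbufs while bdevperf kept 128 outstanding 64 KiB verify I/Os in flight, and the module presumably queues and retries the affected tasks, which is why the table below still reports Fail/s of 0.00 for every job. Note that MiB/s in that table is simply IOPS times IO size: 35.57 IOPS x 64 KiB = 2.22 MiB/s for the first crypto_ram job. A minimal sketch of easing the pool pressure, assuming the same bdevperf binary and bdev.json used in this run (the -t runtime value here is illustrative), is to lower the queue depth:

    # sketch: rerun the same verify workload at a smaller queue depth (-q)
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json \
        -q 32 -o 65536 -w verify -t 10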
00:31:32.610 
00:31:32.610                                                                      Latency(us)
00:31:32.610 Device Information            : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average         min         max
00:31:32.610 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:32.610    Verification LBA range: start 0x0 length 0x100
00:31:32.610    crypto_ram                 :       6.13      35.57       2.22       0.00       0.00  3323607.99   203723.34  2892072.47
00:31:32.610 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:32.610    Verification LBA range: start 0x100 length 0x100
00:31:32.610    crypto_ram                 :       6.16      36.01       2.25       0.00       0.00  3293695.97   218702.99  2924029.07
00:31:32.610 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:32.610    Verification LBA range: start 0x0 length 0x100
00:31:32.610    crypto_ram1                :       6.22      40.38       2.52       0.00       0.00  2953825.26   173764.02  2652397.96
00:31:32.610 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:32.610    Verification LBA range: start 0x100 length 0x100
00:31:32.610    crypto_ram1                :       6.24      40.71       2.54       0.00       0.00  2938979.24   179755.89  2700332.86
00:31:32.610 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:32.610    Verification LBA range: start 0x0 length 0x100
00:31:32.610    crypto_ram2                :       5.70     222.11      13.88       0.00       0.00   510800.56     2449.80   699050.67
00:31:32.610 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:32.610    Verification LBA range: start 0x100 length 0x100
00:31:32.610    crypto_ram2                :       5.70     222.46      13.90       0.00       0.00   507108.09    85883.37   703045.24
00:31:32.610 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:32.610    Verification LBA range: start 0x0 length 0x100
00:31:32.610    crypto_ram3                :       5.80     232.08      14.50       0.00       0.00   471722.12    35701.52   591197.14
00:31:32.610 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:32.610    Verification LBA range: start 0x100 length 0x100
00:31:32.610    crypto_ram3                :       5.80     234.36      14.65       0.00       0.00   466754.55    16727.28   571224.26
00:31:32.610 ===================================================================================================================
00:31:32.610 Total                         :               1063.67      66.48       0.00       0.00   889812.52     2449.80  2924029.07
00:31:32.610 
00:31:32.610 real	0m9.427s
00:31:32.610 user	0m17.974s
00:31:32.610 sys	0m0.425s
00:31:32.610 03:26:03 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable
00:31:32.610 03:26:03 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:31:32.610 ************************************
00:31:32.610 END TEST bdev_verify_big_io
00:31:32.610 ************************************
00:31:32.610 03:26:03 blockdev_crypto_qat -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:32.610 03:26:03 blockdev_crypto_qat -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:31:32.610 03:26:03 blockdev_crypto_qat -- common/autotest_common.sh@1103 -- # xtrace_disable
00:31:32.610 03:26:03 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:31:32.610 ************************************
00:31:32.610 START TEST bdev_write_zeroes
00:31:32.610 ************************************
00:31:32.610 03:26:03 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:32.610 [2024-05-15 03:26:03.666732] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:31:32.610 [2024-05-15 03:26:03.666784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81568 ]
00:31:32.610 [2024-05-15 03:26:03.763309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:32.869 [2024-05-15 03:26:03.853994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:32.869 [2024-05-15 03:26:03.875301] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:31:32.869 [2024-05-15 03:26:03.883322] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:31:32.869 [2024-05-15 03:26:03.891340] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:31:32.869 [2024-05-15 03:26:03.994769] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:31:35.407 [2024-05-15 03:26:06.279457] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:31:35.407 [2024-05-15 03:26:06.279524] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:31:35.407 [2024-05-15 03:26:06.279536] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:31:35.407 [2024-05-15 03:26:06.287477] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:31:35.407 [2024-05-15 03:26:06.287494] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:31:35.407 [2024-05-15 03:26:06.287503] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:31:35.407 [2024-05-15 03:26:06.295498] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:31:35.407 [2024-05-15 03:26:06.295513] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:31:35.407 [2024-05-15 03:26:06.295521] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:31:35.407 [2024-05-15 03:26:06.303519] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:31:35.407 [2024-05-15 03:26:06.303533] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:31:35.407 [2024-05-15 03:26:06.303541] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:31:35.407 Running I/O for 1 seconds...
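The four "Found key" notices show bdev_crypto_create resolving accel-layer keys by name before the Malloc base bdevs exist, so each crypto vbdev is parked until its base arrives. A rough sketch of the registration sequence behind those notices, using the key names from this log but placeholder hex key material (the real key bytes and exact flag spellings vary by SPDK version), assuming the stock rpc.py wrappers:

    # register named keys with the accel framework (key bytes below are placeholders)
    ./scripts/rpc.py accel_crypto_key_create -c AES_CBC -k 00112233445566778899001122334455 -n test_dek_qat_cbc
    ./scripts/rpc.py accel_crypto_key_create -c AES_XTS -k 00112233445566778899001122334455 \
        -e 11223344556677889900112233445500 -n test_dek_qat_xts
    # bind each key to a base bdev; creation defers until MallocN is examined
    ./scripts/rpc.py bdev_crypto_create -n test_dek_qat_cbc Malloc0 crypto_ram
    ./scripts/rpc.py bdev_crypto_create -n test_dek_qat_xts Malloc1 crypto_ram1

The cbc2/xts2 keys map onto Malloc2/Malloc3 the same way.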
00:31:36.344 
00:31:36.344                                                                      Latency(us)
00:31:36.344 Device Information            : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average         min         max
00:31:36.344 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:36.344    crypto_ram                 :       1.03    1838.23       7.18       0.00       0.00    69125.21     6116.69    83886.08
00:31:36.344 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:36.344    crypto_ram1                :       1.03    1843.77       7.20       0.00       0.00    68490.24     6085.49    77894.22
00:31:36.344 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:36.344    crypto_ram2                :       1.02   14100.86      55.08       0.00       0.00     8928.05     2652.65    11734.06
00:31:36.344 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:36.344    crypto_ram3                :       1.02   14133.42      55.21       0.00       0.00     8876.67     2668.25     9299.87
00:31:36.344 ===================================================================================================================
00:31:36.344 Total                         :              31916.29     124.67       0.00       0.00    15845.44     2652.65    83886.08
00:31:36.913 
00:31:36.913 real	0m4.187s
00:31:36.913 user	0m3.763s
00:31:36.913 sys	0m0.384s
00:31:36.913 03:26:07 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable
00:31:36.913 03:26:07 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:31:36.913 ************************************
00:31:36.913 END TEST bdev_write_zeroes
00:31:36.913 ************************************
00:31:36.913 03:26:07 blockdev_crypto_qat -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:36.913 03:26:07 blockdev_crypto_qat -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:31:36.913 03:26:07 blockdev_crypto_qat -- common/autotest_common.sh@1103 -- # xtrace_disable
00:31:36.913 03:26:07 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:31:36.913 ************************************
00:31:36.913 START TEST bdev_json_nonenclosed
00:31:36.913 ************************************
00:31:36.913 03:26:07 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:37.173 [2024-05-15 03:26:07.922640] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:31:37.173 [2024-05-15 03:26:07.922691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82263 ]
00:31:37.173 [2024-05-15 03:26:08.018667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:37.173 [2024-05-15 03:26:08.109814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:37.173 [2024-05-15 03:26:08.109881] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:31:37.173 [2024-05-15 03:26:08.109897] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:37.173 [2024-05-15 03:26:08.109909] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:37.173 00:31:37.173 real 0m0.352s 00:31:37.173 user 0m0.244s 00:31:37.173 sys 0m0.106s 00:31:37.173 03:26:08 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:37.173 03:26:08 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:31:37.173 ************************************ 00:31:37.173 END TEST bdev_json_nonenclosed 00:31:37.173 ************************************ 00:31:37.173 03:26:08 blockdev_crypto_qat -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:37.173 03:26:08 blockdev_crypto_qat -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:31:37.173 03:26:08 blockdev_crypto_qat -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:37.173 03:26:08 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:37.173 ************************************ 00:31:37.173 START TEST bdev_json_nonarray 00:31:37.173 ************************************ 00:31:37.173 03:26:08 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:37.432 [2024-05-15 03:26:08.346313] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:31:37.432 [2024-05-15 03:26:08.346363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82286 ] 00:31:37.432 [2024-05-15 03:26:08.443538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.432 [2024-05-15 03:26:08.533889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.432 [2024-05-15 03:26:08.533960] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
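Both JSON-config guards above feed bdevperf a deliberately malformed config and expect spdk_app_start to bail out: nonenclosed.json presumably omits the enclosing {} around the top-level object, and nonarray.json makes "subsystems" something other than an array. For reference, the minimal shape json_config_prepare_ctx accepts is the same one the chaining test echoes further down:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": []
        }
      ]
    }

Anything else trips exactly the two *ERROR* paths shown here (json_config.c:608 and json_config.c:614).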
00:31:37.432 [2024-05-15 03:26:08.533977] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:37.432 [2024-05-15 03:26:08.533986] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:37.691 00:31:37.691 real 0m0.353s 00:31:37.691 user 0m0.237s 00:31:37.691 sys 0m0.114s 00:31:37.691 03:26:08 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:37.691 03:26:08 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:31:37.691 ************************************ 00:31:37.691 END TEST bdev_json_nonarray 00:31:37.691 ************************************ 00:31:37.691 03:26:08 blockdev_crypto_qat -- bdev/blockdev.sh@787 -- # [[ crypto_qat == bdev ]] 00:31:37.691 03:26:08 blockdev_crypto_qat -- bdev/blockdev.sh@794 -- # [[ crypto_qat == gpt ]] 00:31:37.691 03:26:08 blockdev_crypto_qat -- bdev/blockdev.sh@798 -- # [[ crypto_qat == crypto_sw ]] 00:31:37.691 03:26:08 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:31:37.691 03:26:08 blockdev_crypto_qat -- bdev/blockdev.sh@811 -- # cleanup 00:31:37.691 03:26:08 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:31:37.691 03:26:08 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:37.691 03:26:08 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:31:37.691 03:26:08 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:31:37.691 03:26:08 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t ]] 00:31:37.691 03:26:08 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:31:37.691 00:31:37.691 real 1m12.014s 00:31:37.691 user 2m51.101s 00:31:37.691 sys 0m7.921s 00:31:37.691 03:26:08 blockdev_crypto_qat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:37.691 03:26:08 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:31:37.691 ************************************ 00:31:37.691 END TEST blockdev_crypto_qat 00:31:37.691 ************************************ 00:31:37.691 03:26:08 -- spdk/autotest.sh@356 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:31:37.691 03:26:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:37.691 03:26:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:37.691 03:26:08 -- common/autotest_common.sh@10 -- # set +x 00:31:37.691 ************************************ 00:31:37.691 START TEST chaining 00:31:37.691 ************************************ 00:31:37.691 03:26:08 chaining -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:31:37.691 * Looking for test storage... 
00:31:37.691 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:31:37.691 03:26:08 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.691 03:26:08 chaining -- nvmf/common.sh@7 -- # uname -s 00:31:37.691 03:26:08 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.691 03:26:08 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.691 03:26:08 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.691 03:26:08 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.691 03:26:08 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:31:37.956 03:26:08 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.956 03:26:08 chaining -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.956 03:26:08 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.956 03:26:08 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.956 03:26:08 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.956 03:26:08 chaining -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.956 03:26:08 chaining -- paths/export.sh@5 -- # 
export PATH 00:31:37.956 03:26:08 chaining -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@47 -- # : 0 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:37.956 03:26:08 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:31:37.956 03:26:08 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:31:37.956 03:26:08 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 33445566778899001122334455001122) 00:31:37.956 03:26:08 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:31:37.956 03:26:08 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:31:37.956 03:26:08 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.956 03:26:08 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:37.956 03:26:08 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:37.956 03:26:08 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:31:37.956 03:26:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@296 -- # e810=() 00:31:44.575 
03:26:15 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@297 -- # x722=() 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@298 -- # mlx=() 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:44.575 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:44.575 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.575 
03:26:15 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:44.575 Found net devices under 0000:af:00.0: cvl_0_0 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:44.575 Found net devices under 0000:af:00.1: cvl_0_1 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:44.575 03:26:15 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.576 03:26:15 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:31:44.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:44.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms
00:31:44.576 
00:31:44.576 --- 10.0.0.2 ping statistics ---
00:31:44.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:44.576 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:44.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:44.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms
00:31:44.576 
00:31:44.576 --- 10.0.0.1 ping statistics ---
00:31:44.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:44.576 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@422 -- # return 0
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:31:44.576 03:26:15 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:31:44.576 03:26:15 chaining -- common/autotest_common.sh@720 -- # xtrace_disable
00:31:44.576 03:26:15 chaining -- common/autotest_common.sh@10 -- # set +x
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@481 -- # nvmfpid=85906
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@482 -- # waitforlisten 85906
00:31:44.576 03:26:15 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:31:44.576 03:26:15 chaining -- common/autotest_common.sh@827 -- # '[' -z 85906 ']'
00:31:44.576 03:26:15 chaining -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:44.576 03:26:15 chaining -- common/autotest_common.sh@832 -- # local max_retries=100
00:31:44.576 03:26:15 chaining -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:44.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:44.576 03:26:15 chaining -- common/autotest_common.sh@836 -- # xtrace_disable
00:31:44.576 03:26:15 chaining -- common/autotest_common.sh@10 -- # set +x
00:31:44.576 [2024-05-15 03:26:15.417507] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
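By this point the fixture has split the NIC pair across a network namespace: cvl_0_0 sits inside cvl_0_0_ns_spdk as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), both directions ping, and nvmf_tgt runs inside the namespace. Once the listener comes up on 10.0.0.2:4420 (a few lines below), the attach that the generated config performs later can also be issued by hand; a sketch with the stock rpc.py wrapper (flag spellings per current SPDK and may drift between versions):

    # manual equivalent of the bdev_nvme_attach_controller block gen_nvme.sh emits below
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f IPv4 \
        -n nqn.2016-06.io.spdk:cnode0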
00:31:44.576 [2024-05-15 03:26:15.417564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.576 [2024-05-15 03:26:15.514774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.576 [2024-05-15 03:26:15.609685] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.576 [2024-05-15 03:26:15.609727] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.576 [2024-05-15 03:26:15.609738] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.576 [2024-05-15 03:26:15.609747] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.576 [2024-05-15 03:26:15.609755] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.576 [2024-05-15 03:26:15.609782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.512 03:26:16 chaining -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:45.512 03:26:16 chaining -- common/autotest_common.sh@860 -- # return 0 00:31:45.512 03:26:16 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:45.512 03:26:16 chaining -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:45.512 03:26:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:45.512 03:26:16 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@69 -- # mktemp 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.tl2CWxxe3G 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@69 -- # mktemp 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.e144OVicNj 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:45.513 malloc0 00:31:45.513 true 00:31:45.513 true 00:31:45.513 [2024-05-15 03:26:16.443867] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:31:45.513 crypto0 00:31:45.513 [2024-05-15 03:26:16.451893] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:31:45.513 crypto1 00:31:45.513 [2024-05-15 03:26:16.460004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.513 [2024-05-15 03:26:16.475984] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:45.513 [2024-05-15 03:26:16.476213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@85 -- # update_stats 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:31:45.513 03:26:16 chaining -- 
bdev/chaining.sh@39 -- # opcode= 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:45.513 03:26:16 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:45.513 03:26:16 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
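The xtrace above replays chaining.sh's stat helpers: get_stat pulls one counter out of accel_get_stats (a top-level field when no opcode is given, otherwise the matching entry under .operations[]), and update_stats snapshots four of them into the stats map so later checks such as (( 13 == stats[sequence_executed] + 1 )) can assert on deltas; the final copy_executed assignment follows on the next line. Reconstructed from the echoed commands (an inferred shape, not copied from the script):

    # inferred from the rpc_cmd/jq lines in the trace above
    get_stat() {
        local event=$1 opcode=$2 rpc=rpc_cmd
        if [[ -z $opcode ]]; then
            $rpc accel_get_stats | jq -r ".$event"                 # e.g. .sequence_executed
        else
            $rpc accel_get_stats | jq -r ".operations[] | select(.opcode == \"$opcode\").$event"
        fi
    }

    update_stats() {
        stats["sequence_executed"]=$(get_stat sequence_executed)
        stats["encrypt_executed"]=$(get_stat executed encrypt)
        stats["decrypt_executed"]=$(get_stat executed decrypt)
        stats["copy_executed"]=$(get_stat executed copy)
    }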
00:31:45.772 03:26:16 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:31:45.772 03:26:16 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.tl2CWxxe3G bs=1K count=64 00:31:45.772 64+0 records in 00:31:45.772 64+0 records out 00:31:45.772 65536 bytes (66 kB, 64 KiB) copied, 0.000248709 s, 264 MB/s 00:31:45.772 03:26:16 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.tl2CWxxe3G --ob Nvme0n1 --bs 65536 --count 1 00:31:45.772 03:26:16 chaining -- bdev/chaining.sh@25 -- # local config 00:31:45.772 03:26:16 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:31:45.772 03:26:16 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:31:45.772 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:31:45.772 03:26:16 chaining -- bdev/chaining.sh@31 -- # config='{ 00:31:45.772 "subsystems": [ 00:31:45.772 { 00:31:45.772 "subsystem": "bdev", 00:31:45.772 "config": [ 00:31:45.772 { 00:31:45.772 "method": "bdev_nvme_attach_controller", 00:31:45.772 "params": { 00:31:45.772 "trtype": "tcp", 00:31:45.772 "adrfam": "IPv4", 00:31:45.772 "name": "Nvme0", 00:31:45.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.772 "traddr": "10.0.0.2", 00:31:45.772 "trsvcid": "4420" 00:31:45.772 } 00:31:45.772 }, 00:31:45.772 { 00:31:45.772 "method": "bdev_set_options", 00:31:45.772 "params": { 00:31:45.772 "bdev_auto_examine": false 00:31:45.772 } 00:31:45.772 } 00:31:45.772 ] 00:31:45.772 } 00:31:45.772 ] 00:31:45.772 }' 00:31:45.772 03:26:16 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.tl2CWxxe3G --ob Nvme0n1 --bs 65536 --count 1 00:31:45.772 03:26:16 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:31:45.772 "subsystems": [ 00:31:45.772 { 00:31:45.772 "subsystem": "bdev", 00:31:45.772 "config": [ 00:31:45.772 { 00:31:45.772 "method": "bdev_nvme_attach_controller", 00:31:45.772 "params": { 00:31:45.772 "trtype": "tcp", 00:31:45.772 "adrfam": "IPv4", 00:31:45.772 "name": "Nvme0", 00:31:45.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.772 "traddr": "10.0.0.2", 00:31:45.772 "trsvcid": "4420" 00:31:45.772 } 00:31:45.772 }, 00:31:45.772 { 00:31:45.772 "method": "bdev_set_options", 00:31:45.773 "params": { 00:31:45.773 "bdev_auto_examine": false 00:31:45.773 } 00:31:45.773 } 00:31:45.773 ] 00:31:45.773 } 00:31:45.773 ] 00:31:45.773 }' 00:31:45.773 [2024-05-15 03:26:16.787090] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
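With 64 KiB of /dev/urandom in /tmp/tmp.tl2CWxxe3G, spdk_dd pushes it through Nvme0n1 (backed by the crypto0/crypto1 chain on the target) using the JSON config echoed above, with bdev_auto_examine disabled so only the listed bdevs attach. The read-back into /tmp/tmp.e144OVicNj further down completes the round trip; conceptually the verification reduces to this sketch, where $input/$output stand in for the two mktemp files and config.json for the /dev/fd/62 config:

    # write plaintext through the encrypting chain, read it back, compare
    spdk_dd -c config.json --if "$input"  --ob Nvme0n1 --bs 65536 --count 1
    spdk_dd -c config.json --of "$output" --ib Nvme0n1 --bs 65536 --count 1
    cmp "$input" "$output"   # equal bytes mean encrypt-on-write/decrypt-on-read round-tripped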
00:31:45.773 [2024-05-15 03:26:16.787144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86181 ] 00:31:45.773 [2024-05-15 03:26:16.883997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.032 [2024-05-15 03:26:16.974450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.549  Copying: 64/64 [kB] (average 9142 kBps) 00:31:46.549 00:31:46.549 03:26:17 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # opcode= 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:31:46.550 03:26:17 chaining 
-- bdev/chaining.sh@95 -- # get_stat executed copy 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@96 -- # update_stats 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # opcode= 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:31:46.550 03:26:17 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.550 03:26:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:46.809 03:26:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:31:46.809 03:26:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.809 03:26:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:46.809 03:26:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:46.809 03:26:17 chaining 
-- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:31:46.809 03:26:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.809 03:26:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:46.809 03:26:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:31:46.809 03:26:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.809 03:26:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:46.809 03:26:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.e144OVicNj --ib Nvme0n1 --bs 65536 --count 1 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@25 -- # local config 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:31:46.809 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@31 -- # config='{ 00:31:46.809 "subsystems": [ 00:31:46.809 { 00:31:46.809 "subsystem": "bdev", 00:31:46.809 "config": [ 00:31:46.809 { 00:31:46.809 "method": "bdev_nvme_attach_controller", 00:31:46.809 "params": { 00:31:46.809 "trtype": "tcp", 00:31:46.809 "adrfam": "IPv4", 00:31:46.809 "name": "Nvme0", 00:31:46.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:46.809 "traddr": "10.0.0.2", 00:31:46.809 "trsvcid": "4420" 00:31:46.809 } 00:31:46.809 }, 00:31:46.809 { 00:31:46.809 "method": "bdev_set_options", 00:31:46.809 "params": { 00:31:46.809 "bdev_auto_examine": false 00:31:46.809 } 00:31:46.809 } 00:31:46.809 ] 00:31:46.809 } 00:31:46.809 ] 00:31:46.809 }' 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.e144OVicNj --ib Nvme0n1 --bs 65536 --count 1 00:31:46.809 03:26:17 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:31:46.809 "subsystems": [ 00:31:46.809 { 00:31:46.809 "subsystem": "bdev", 00:31:46.809 "config": [ 00:31:46.810 { 00:31:46.810 "method": "bdev_nvme_attach_controller", 00:31:46.810 "params": { 00:31:46.810 "trtype": "tcp", 00:31:46.810 "adrfam": "IPv4", 00:31:46.810 "name": "Nvme0", 00:31:46.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:46.810 "traddr": "10.0.0.2", 00:31:46.810 "trsvcid": "4420" 00:31:46.810 } 00:31:46.810 }, 00:31:46.810 { 00:31:46.810 "method": "bdev_set_options", 00:31:46.810 "params": { 00:31:46.810 
"bdev_auto_examine": false 00:31:46.810 } 00:31:46.810 } 00:31:46.810 ] 00:31:46.810 } 00:31:46.810 ] 00:31:46.810 }' 00:31:47.069 [2024-05-15 03:26:17.991968] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:31:47.069 [2024-05-15 03:26:17.992026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86444 ] 00:31:47.069 [2024-05-15 03:26:18.090917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.069 [2024-05-15 03:26:18.181397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.636  Copying: 64/64 [kB] (average 12 MBps) 00:31:47.636 00:31:47.636 03:26:18 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:31:47.636 03:26:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:47.636 03:26:18 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:31:47.636 03:26:18 chaining -- bdev/chaining.sh@39 -- # opcode= 00:31:47.636 03:26:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:47.636 03:26:18 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:31:47.636 03:26:18 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:31:47.636 03:26:18 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:31:47.636 03:26:18 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.636 03:26:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:47.636 03:26:18 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.894 03:26:18 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:31:47.894 03:26:18 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:31:47.894 03:26:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:47.894 03:26:18 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:47.894 03:26:18 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:31:47.894 03:26:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:47.894 03:26:18 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:31:47.894 03:26:18 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:47.894 03:26:18 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:31:47.895 03:26:18 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.895 03:26:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:47.895 03:26:18 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@102 -- # get_stat executed decrypt 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:31:47.895 03:26:18 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.895 03:26:18 chaining -- 
common/autotest_common.sh@10 -- # set +x 00:31:47.895 03:26:18 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:47.895 03:26:18 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.895 03:26:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:47.895 03:26:18 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.tl2CWxxe3G /tmp/tmp.e144OVicNj 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@25 -- # local config 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:31:47.895 03:26:18 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:31:47.895 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:31:47.895 03:26:19 chaining -- bdev/chaining.sh@31 -- # config='{ 00:31:47.895 "subsystems": [ 00:31:47.895 { 00:31:47.895 "subsystem": "bdev", 00:31:47.895 "config": [ 00:31:47.895 { 00:31:47.895 "method": "bdev_nvme_attach_controller", 00:31:47.895 "params": { 00:31:47.895 "trtype": "tcp", 00:31:47.895 "adrfam": "IPv4", 00:31:47.895 "name": "Nvme0", 00:31:47.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:47.895 "traddr": "10.0.0.2", 00:31:47.895 "trsvcid": "4420" 00:31:47.895 } 00:31:47.895 }, 00:31:47.895 { 00:31:47.895 "method": "bdev_set_options", 00:31:47.895 "params": { 00:31:47.895 "bdev_auto_examine": false 00:31:47.895 } 00:31:47.895 } 00:31:47.895 ] 00:31:47.895 } 00:31:47.895 ] 00:31:47.895 }' 00:31:47.895 03:26:19 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:31:47.895 03:26:19 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:31:47.895 "subsystems": [ 00:31:47.895 { 00:31:47.895 "subsystem": "bdev", 00:31:47.895 "config": [ 00:31:47.895 { 00:31:47.895 "method": "bdev_nvme_attach_controller", 00:31:47.895 "params": { 00:31:47.895 "trtype": "tcp", 00:31:47.895 "adrfam": "IPv4", 00:31:47.895 "name": "Nvme0", 00:31:47.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:47.895 "traddr": "10.0.0.2", 00:31:47.895 "trsvcid": "4420" 00:31:47.895 } 00:31:47.895 }, 00:31:47.895 { 00:31:47.895 "method": "bdev_set_options", 00:31:47.895 "params": { 00:31:47.895 "bdev_auto_examine": false 00:31:47.895 } 00:31:47.895 } 00:31:47.895 ] 00:31:47.895 } 00:31:47.895 ] 00:31:47.895 }' 
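
A note on the pattern visible in the trace above: each spdk_dd transfer gets its bdev configuration generated on the fly. gen_nvme.sh --mode=remote emits a subsystem config that attaches Nvme0 from the TCP target (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode0), jq appends a bdev_set_options call that turns off auto-examine, and the resulting JSON is handed to spdk_dd on file descriptor 62 (hence the -c /dev/fd/62 next to the echo of the config). Around every transfer, get_stat/update_stats snapshot the accel framework's counters so the script can assert the expected deltas afterwards. A minimal sketch of that helper, reconstructed from the xtrace lines above; the function body is inferred, so the exact chaining.sh source may differ:

    # get_stat NAME [OPCODE] - read one accel counter from the target; with
    # no opcode it reads a top-level field such as sequence_executed,
    # otherwise it picks one opcode's "executed" count out of .operations[].
    get_stat() {
            local event=$1 opcode=$2
            if [[ -z $opcode ]]; then
                    rpc_cmd accel_get_stats | jq -r ".$event"
            else
                    rpc_cmd accel_get_stats |
                            jq -r ".operations[] | select(.opcode == \"$opcode\").executed"
            fi
    }

    declare -A stats
    # Snapshot, transfer, assert. With two chained crypto bdevs, one 64 KiB
    # block written through Nvme0n1 adds 1 executed sequence and 2 encrypts:
    stats[sequence_executed]=$(get_stat sequence_executed)
    stats[encrypt_executed]=$(get_stat executed encrypt)
    spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1
    (( $(get_stat sequence_executed) == stats[sequence_executed] + 1 ))
    (( $(get_stat executed encrypt) == stats[encrypt_executed] + 2 ))

The same accounting explains the later assertions: a 16x4096-byte transfer adds 16 sequences, and each sequence contributes one encrypt (or decrypt) per crypto bdev in the chain, hence the "+ 32" checks.
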
00:31:48.153 [2024-05-15 03:26:19.057532] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:31:48.153 [2024-05-15 03:26:19.057588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86669 ] 00:31:48.153 [2024-05-15 03:26:19.145250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.153 [2024-05-15 03:26:19.236179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.670  Copying: 64/64 [kB] (average 10 MBps) 00:31:48.670 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@106 -- # update_stats 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@39 -- # opcode= 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:31:48.670 03:26:19 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.670 03:26:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:31:48.670 03:26:19 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:31:48.670 03:26:19 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.670 03:26:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:48.670 03:26:19 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:48.670 03:26:19 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.670 03:26:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:48.670 03:26:19 chaining -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:48.670 03:26:19 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.670 03:26:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:48.670 03:26:19 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:31:48.670 03:26:19 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.929 03:26:19 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:31:48.929 03:26:19 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.tl2CWxxe3G --ob Nvme0n1 --bs 4096 --count 16 00:31:48.929 03:26:19 chaining -- bdev/chaining.sh@25 -- # local config 00:31:48.929 03:26:19 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:31:48.929 03:26:19 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:31:48.929 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:31:48.929 03:26:19 chaining -- bdev/chaining.sh@31 -- # config='{ 00:31:48.929 "subsystems": [ 00:31:48.929 { 00:31:48.929 "subsystem": "bdev", 00:31:48.929 "config": [ 00:31:48.929 { 00:31:48.929 "method": "bdev_nvme_attach_controller", 00:31:48.929 "params": { 00:31:48.929 "trtype": "tcp", 00:31:48.929 "adrfam": "IPv4", 00:31:48.929 "name": "Nvme0", 00:31:48.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:48.929 "traddr": "10.0.0.2", 00:31:48.929 "trsvcid": "4420" 00:31:48.929 } 00:31:48.929 }, 00:31:48.929 { 00:31:48.929 "method": "bdev_set_options", 00:31:48.929 "params": { 00:31:48.929 "bdev_auto_examine": false 00:31:48.929 } 00:31:48.929 } 00:31:48.929 ] 00:31:48.929 } 00:31:48.929 ] 00:31:48.929 }' 00:31:48.929 03:26:19 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.tl2CWxxe3G --ob Nvme0n1 --bs 4096 --count 16 00:31:48.929 03:26:19 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:31:48.929 "subsystems": [ 00:31:48.929 { 00:31:48.929 "subsystem": "bdev", 00:31:48.929 "config": [ 00:31:48.929 { 00:31:48.929 "method": "bdev_nvme_attach_controller", 00:31:48.929 "params": { 00:31:48.929 "trtype": "tcp", 00:31:48.929 "adrfam": "IPv4", 00:31:48.929 "name": "Nvme0", 00:31:48.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:48.929 "traddr": "10.0.0.2", 00:31:48.929 "trsvcid": "4420" 00:31:48.929 } 00:31:48.929 }, 00:31:48.929 { 00:31:48.929 "method": "bdev_set_options", 00:31:48.929 "params": { 00:31:48.929 "bdev_auto_examine": false 00:31:48.929 } 00:31:48.929 } 00:31:48.929 ] 00:31:48.929 } 00:31:48.929 ] 00:31:48.929 }' 00:31:48.929 [2024-05-15 03:26:19.960556] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:31:48.929 [2024-05-15 03:26:19.960611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86729 ] 00:31:48.929 [2024-05-15 03:26:20.061373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.187 [2024-05-15 03:26:20.159409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.704  Copying: 64/64 [kB] (average 9142 kBps) 00:31:49.704 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@39 -- # opcode= 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:31:49.704 03:26:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.704 03:26:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:31:49.704 03:26:20 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:31:49.704 03:26:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.704 03:26:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:49.704 03:26:20 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:49.704 03:26:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.704 03:26:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:31:49.704 03:26:20 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:31:49.704 03:26:20 
chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:31:49.704 03:26:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:49.704 03:26:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.704 03:26:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:49.962 03:26:20 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@114 -- # update_stats 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@39 -- # opcode= 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:31:49.962 03:26:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.962 03:26:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:31:49.962 03:26:20 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:49.962 03:26:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:31:49.963 03:26:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.963 03:26:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:49.963 03:26:20 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.963 03:26:20 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:31:49.963 03:26:20 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:31:49.963 03:26:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:49.963 03:26:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:49.963 03:26:20 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:31:49.963 03:26:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:49.963 03:26:20 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:31:49.963 03:26:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:49.963 
03:26:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:31:49.963 03:26:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.963 03:26:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:49.963 03:26:20 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:49.963 03:26:21 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.963 03:26:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:49.963 03:26:21 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@117 -- # : 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.e144OVicNj --ib Nvme0n1 --bs 4096 --count 16 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@25 -- # local config 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:31:49.963 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@31 -- # config='{ 00:31:49.963 "subsystems": [ 00:31:49.963 { 00:31:49.963 "subsystem": "bdev", 00:31:49.963 "config": [ 00:31:49.963 { 00:31:49.963 "method": "bdev_nvme_attach_controller", 00:31:49.963 "params": { 00:31:49.963 "trtype": "tcp", 00:31:49.963 "adrfam": "IPv4", 00:31:49.963 "name": "Nvme0", 00:31:49.963 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:49.963 "traddr": "10.0.0.2", 00:31:49.963 "trsvcid": "4420" 00:31:49.963 } 00:31:49.963 }, 00:31:49.963 { 00:31:49.963 "method": "bdev_set_options", 00:31:49.963 "params": { 00:31:49.963 "bdev_auto_examine": false 00:31:49.963 } 00:31:49.963 } 00:31:49.963 ] 00:31:49.963 } 00:31:49.963 ] 00:31:49.963 }' 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.e144OVicNj --ib Nvme0n1 --bs 4096 --count 16 00:31:49.963 03:26:21 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:31:49.963 "subsystems": [ 00:31:49.963 { 00:31:49.963 "subsystem": "bdev", 00:31:49.963 "config": [ 00:31:49.963 { 00:31:49.963 "method": "bdev_nvme_attach_controller", 00:31:49.963 "params": { 00:31:49.963 "trtype": "tcp", 00:31:49.963 "adrfam": "IPv4", 00:31:49.963 "name": "Nvme0", 00:31:49.963 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:49.963 "traddr": "10.0.0.2", 00:31:49.963 "trsvcid": "4420" 00:31:49.963 } 00:31:49.963 }, 00:31:49.963 { 00:31:49.963 
"method": "bdev_set_options", 00:31:49.963 "params": { 00:31:49.963 "bdev_auto_examine": false 00:31:49.963 } 00:31:49.963 } 00:31:49.963 ] 00:31:49.963 } 00:31:49.963 ] 00:31:49.963 }' 00:31:50.221 [2024-05-15 03:26:21.164352] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:31:50.222 [2024-05-15 03:26:21.164409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86985 ] 00:31:50.222 [2024-05-15 03:26:21.262477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.222 [2024-05-15 03:26:21.352352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.048  Copying: 64/64 [kB] (average 492 kBps) 00:31:51.048 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@39 -- # opcode= 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:31:51.048 03:26:22 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.048 03:26:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:51.048 03:26:22 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:31:51.048 03:26:22 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.048 03:26:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:51.048 03:26:22 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@121 -- # get_stat executed decrypt 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:31:51.048 03:26:22 chaining -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.048 03:26:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:51.048 03:26:22 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@39 -- # event=executed 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:31:51.048 03:26:22 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:31:51.048 03:26:22 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.048 03:26:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:51.048 03:26:22 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.308 03:26:22 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:31:51.308 03:26:22 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.tl2CWxxe3G /tmp/tmp.e144OVicNj 00:31:51.308 03:26:22 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:31:51.308 03:26:22 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:31:51.308 03:26:22 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.tl2CWxxe3G /tmp/tmp.e144OVicNj 00:31:51.308 03:26:22 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:31:51.308 03:26:22 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:51.308 03:26:22 chaining -- nvmf/common.sh@117 -- # sync 00:31:51.308 03:26:22 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:51.308 03:26:22 chaining -- nvmf/common.sh@120 -- # set +e 00:31:51.308 03:26:22 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:51.308 03:26:22 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:51.308 rmmod nvme_tcp 00:31:51.308 rmmod nvme_fabrics 00:31:51.308 rmmod nvme_keyring 00:31:51.308 03:26:22 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:51.308 03:26:22 chaining -- nvmf/common.sh@124 -- # set -e 00:31:51.308 03:26:22 chaining -- nvmf/common.sh@125 -- # return 0 00:31:51.308 03:26:22 chaining -- nvmf/common.sh@489 -- # '[' -n 85906 ']' 00:31:51.308 03:26:22 chaining -- nvmf/common.sh@490 -- # killprocess 85906 00:31:51.308 03:26:22 chaining -- common/autotest_common.sh@946 -- # '[' -z 85906 ']' 00:31:51.308 03:26:22 chaining -- common/autotest_common.sh@950 -- # kill -0 85906 00:31:51.308 03:26:22 chaining -- common/autotest_common.sh@951 -- # uname 00:31:51.308 03:26:22 chaining -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:51.308 03:26:22 chaining -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85906 00:31:51.308 03:26:22 chaining -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:51.308 03:26:22 chaining -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:51.308 03:26:22 chaining -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85906' 00:31:51.308 killing process with pid 85906 00:31:51.308 03:26:22 chaining -- common/autotest_common.sh@965 -- # kill 85906 00:31:51.308 [2024-05-15 03:26:22.362551] app.c:1024:log_deprecation_hits: 
*WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:51.308 03:26:22 chaining -- common/autotest_common.sh@970 -- # wait 85906 00:31:51.567 03:26:22 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:51.567 03:26:22 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:51.567 03:26:22 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:51.567 03:26:22 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:51.567 03:26:22 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:51.567 03:26:22 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.567 03:26:22 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:51.567 03:26:22 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.099 03:26:24 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:54.099 03:26:24 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:31:54.099 03:26:24 chaining -- bdev/chaining.sh@132 -- # bperfpid=87627 00:31:54.099 03:26:24 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:31:54.099 03:26:24 chaining -- bdev/chaining.sh@134 -- # waitforlisten 87627 00:31:54.099 03:26:24 chaining -- common/autotest_common.sh@827 -- # '[' -z 87627 ']' 00:31:54.099 03:26:24 chaining -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.099 03:26:24 chaining -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:54.099 03:26:24 chaining -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.099 03:26:24 chaining -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:54.099 03:26:24 chaining -- common/autotest_common.sh@10 -- # set +x 00:31:54.099 [2024-05-15 03:26:24.715506] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:31:54.099 [2024-05-15 03:26:24.715563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87627 ]
00:31:54.099 [2024-05-15 03:26:24.813969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:54.099 [2024-05-15 03:26:24.912623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:54.667 03:26:25 chaining -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:31:54.667 03:26:25 chaining -- common/autotest_common.sh@860 -- # return 0
00:31:54.667 03:26:25 chaining -- bdev/chaining.sh@135 -- # rpc_cmd
00:31:54.667 03:26:25 chaining -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:54.667 03:26:25 chaining -- common/autotest_common.sh@10 -- # set +x
00:31:54.667 malloc0
00:31:54.667 true
00:31:54.667 true
00:31:54.667 [2024-05-15 03:26:25.726527] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0"
00:31:54.667 crypto0
00:31:54.667 [2024-05-15 03:26:25.734556] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1"
00:31:54.667 crypto1
00:31:54.667 03:26:25 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:54.667 03:26:25 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:31:54.925 Running I/O for 5 seconds...
00:32:00.197
00:32:00.197 Latency(us)
00:32:00.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:00.197 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096)
00:32:00.197 Verification LBA range: start 0x0 length 0x2000
00:32:00.197 crypto1 : 5.01 10385.05 40.57 0.00 0.00 24568.96 2481.01 16602.45
00:32:00.197 ===================================================================================================================
00:32:00.197 Total : 10385.05 40.57 0.00 0.00 24568.96 2481.01 16602.45
00:32:00.197 0
00:32:00.197 03:26:30 chaining -- bdev/chaining.sh@146 -- # killprocess 87627
00:32:00.197 03:26:30 chaining -- common/autotest_common.sh@946 -- # '[' -z 87627 ']'
00:32:00.197 03:26:30 chaining -- common/autotest_common.sh@950 -- # kill -0 87627
00:32:00.197 03:26:30 chaining -- common/autotest_common.sh@951 -- # uname
00:32:00.197 03:26:30 chaining -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:32:00.197 03:26:30 chaining -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87627
00:32:00.197 03:26:30 chaining -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:32:00.197 03:26:30 chaining -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:32:00.197 03:26:30 chaining -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87627'
00:32:00.197 killing process with pid 87627
00:32:00.197 03:26:30 chaining -- common/autotest_common.sh@965 -- # kill 87627
00:32:00.197 Received shutdown signal, test time was about 5.000000 seconds
00:32:00.197
00:32:00.197 Latency(us)
00:32:00.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:00.197 ===================================================================================================================
00:32:00.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:00.197 03:26:30 chaining -- common/autotest_common.sh@970 -- # wait 87627
00:32:00.197 03:26:31 chaining -- bdev/chaining.sh@151 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:32:00.197 03:26:31 chaining -- bdev/chaining.sh@152 -- # bperfpid=88628 00:32:00.197 03:26:31 chaining -- bdev/chaining.sh@154 -- # waitforlisten 88628 00:32:00.197 03:26:31 chaining -- common/autotest_common.sh@827 -- # '[' -z 88628 ']' 00:32:00.197 03:26:31 chaining -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.197 03:26:31 chaining -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:00.197 03:26:31 chaining -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.197 03:26:31 chaining -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:00.197 03:26:31 chaining -- common/autotest_common.sh@10 -- # set +x 00:32:00.197 [2024-05-15 03:26:31.190246] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:32:00.197 [2024-05-15 03:26:31.190304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88628 ] 00:32:00.197 [2024-05-15 03:26:31.278843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.456 [2024-05-15 03:26:31.372582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.456 03:26:31 chaining -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:00.456 03:26:31 chaining -- common/autotest_common.sh@860 -- # return 0 00:32:00.456 03:26:31 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:32:00.456 03:26:31 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.456 03:26:31 chaining -- common/autotest_common.sh@10 -- # set +x 00:32:00.456 malloc0 00:32:00.456 true 00:32:00.456 true 00:32:00.456 [2024-05-15 03:26:31.535521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:32:00.456 [2024-05-15 03:26:31.535568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:00.456 [2024-05-15 03:26:31.535585] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9a8b00 00:32:00.456 [2024-05-15 03:26:31.535594] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:00.456 [2024-05-15 03:26:31.536712] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:00.456 [2024-05-15 03:26:31.536734] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:32:00.456 pt0 00:32:00.456 [2024-05-15 03:26:31.543551] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:32:00.456 crypto0 00:32:00.456 [2024-05-15 03:26:31.551573] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:32:00.456 crypto1 00:32:00.456 03:26:31 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.456 03:26:31 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:00.714 Running I/O for 5 seconds... 
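
This bdevperf pass (pid 88628) stacks the chain one level deeper than the previous one: a passthru bdev pt0 is registered on malloc0, then crypto0 (key "key0") and crypto1 (key "key1") are layered above it, so every 4 KiB verify I/O crosses two crypto bdevs on its way to memory. The Latency table that follows can be sanity-checked from the run parameters (queue depth 256, IO size 4096); a rough worked check, not part of the log itself:

    throughput  = 8265.48 IOPS x 4096 B          ~= 33.9 MB/s = 32.29 MiB/s  (matches the MiB/s column)
    avg latency ~= depth / IOPS = 256 / 8265.48  ~= 31.0 ms                  (reported Average: 30882.96 us)

The drop from the previous run's 10385.05 IOPS is consistent with the extra passthru/crypto layering on the I/O path.
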
00:32:06.055
00:32:06.055 Latency(us)
00:32:06.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:06.055 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096)
00:32:06.055 Verification LBA range: start 0x0 length 0x2000
00:32:06.055 crypto1 : 5.02 8265.48 32.29 0.00 0.00 30882.96 7271.38 18599.74
00:32:06.055 ===================================================================================================================
00:32:06.055 Total : 8265.48 32.29 0.00 0.00 30882.96 7271.38 18599.74
00:32:06.055 0
00:32:06.055 03:26:36 chaining -- bdev/chaining.sh@167 -- # killprocess 88628
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@946 -- # '[' -z 88628 ']'
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@950 -- # kill -0 88628
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@951 -- # uname
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88628
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88628'
00:32:06.055 killing process with pid 88628
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@965 -- # kill 88628
00:32:06.055 Received shutdown signal, test time was about 5.000000 seconds
00:32:06.055
00:32:06.055 Latency(us)
00:32:06.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:06.055 ===================================================================================================================
00:32:06.055 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@970 -- # wait 88628
00:32:06.055 03:26:36 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT
00:32:06.055 03:26:36 chaining -- bdev/chaining.sh@170 -- # killprocess 88628
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@946 -- # '[' -z 88628 ']'
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@950 -- # kill -0 88628
00:32:06.055 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (88628) - No such process
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@973 -- # echo 'Process with pid 88628 is not found'
00:32:06.055 Process with pid 88628 is not found
00:32:06.055 03:26:36 chaining -- bdev/chaining.sh@171 -- # wait 88628
00:32:06.055 03:26:36 chaining -- bdev/chaining.sh@175 -- # nvmftestinit
00:32:06.055 03:26:36 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:32:06.055 03:26:36 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:06.055 03:26:36 chaining -- nvmf/common.sh@448 -- # prepare_net_devs
00:32:06.055 03:26:36 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no
00:32:06.055 03:26:36 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns
00:32:06.055 03:26:36 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:06.055 03:26:36 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]]
00:32:06.055 03:26:36 chaining -- nvmf/common.sh@414 -- #
gather_supported_nvmf_pci_devs 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:32:06.055 03:26:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@296 -- # e810=() 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@297 -- # x722=() 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@298 -- # mlx=() 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.055 03:26:36 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:06.056 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:06.056 03:26:36 chaining -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:06.056 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:06.056 Found net devices under 0000:af:00.0: cvl_0_0 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:06.056 Found net devices under 0000:af:00.1: cvl_0_1 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:32:06.056 03:26:36 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.056 03:26:36 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.056 03:26:37 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.056 03:26:37 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.056 03:26:37 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:06.056 03:26:37 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.056 03:26:37 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.056 03:26:37 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.056 03:26:37 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:06.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:32:06.056 00:32:06.056 --- 10.0.0.2 ping statistics --- 00:32:06.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.056 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:32:06.056 03:26:37 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:06.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:32:06.316 00:32:06.316 --- 10.0.0.1 ping statistics --- 00:32:06.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.316 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@422 -- # return 0 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:06.316 03:26:37 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:06.316 03:26:37 chaining -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:06.316 03:26:37 chaining -- common/autotest_common.sh@10 -- # set +x 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@481 -- # nvmfpid=89572 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@482 -- # waitforlisten 89572 00:32:06.316 03:26:37 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:06.316 03:26:37 chaining -- common/autotest_common.sh@827 -- # '[' -z 89572 ']' 00:32:06.316 03:26:37 chaining -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 
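
The nvmf/common.sh trace above builds a loopback NVMe-oF TCP topology from the two detected ports: cvl_0_0 (the target side) is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2/24, cvl_0_1 (the initiator side) stays in the root namespace as 10.0.0.1/24, and the two pings prove reachability in both directions before the target application starts inside the namespace. Condensed from the commands in the trace, with the device and namespace names exactly as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

nvmf_tgt is then launched through ip netns exec cvl_0_0_ns_spdk (the prefix appears twice in the logged command line, which is redundant but harmless), so its TCP listener on 10.0.0.2:4420 lives entirely inside the namespace.
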
00:32:06.316 03:26:37 chaining -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:06.316 03:26:37 chaining -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.316 03:26:37 chaining -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:06.316 03:26:37 chaining -- common/autotest_common.sh@10 -- # set +x 00:32:06.316 [2024-05-15 03:26:37.294573] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:32:06.316 [2024-05-15 03:26:37.294628] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.316 [2024-05-15 03:26:37.392993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.575 [2024-05-15 03:26:37.488667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.575 [2024-05-15 03:26:37.488706] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.575 [2024-05-15 03:26:37.488717] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.575 [2024-05-15 03:26:37.488727] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.575 [2024-05-15 03:26:37.488734] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.575 [2024-05-15 03:26:37.488761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.144 03:26:38 chaining -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:07.144 03:26:38 chaining -- common/autotest_common.sh@860 -- # return 0 00:32:07.144 03:26:38 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:07.144 03:26:38 chaining -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:07.144 03:26:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:32:07.144 03:26:38 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.144 03:26:38 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:32:07.144 03:26:38 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.144 03:26:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:32:07.144 malloc0 00:32:07.144 [2024-05-15 03:26:38.288660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.403 [2024-05-15 03:26:38.304633] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:07.403 [2024-05-15 03:26:38.304883] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.403 03:26:38 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.403 03:26:38 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:32:07.403 03:26:38 chaining -- bdev/chaining.sh@189 -- # bperfpid=89817 00:32:07.403 03:26:38 chaining -- bdev/chaining.sh@191 -- # waitforlisten 89817 /var/tmp/bperf.sock 00:32:07.403 03:26:38 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:32:07.403 03:26:38 chaining -- common/autotest_common.sh@827 -- # '[' -z 89817 ']' 00:32:07.403 03:26:38 chaining -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:07.403 03:26:38 chaining -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:07.403 03:26:38 chaining -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:07.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:07.403 03:26:38 chaining -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:07.403 03:26:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:32:07.403 [2024-05-15 03:26:38.372221] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:32:07.403 [2024-05-15 03:26:38.372278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89817 ] 00:32:07.403 [2024-05-15 03:26:38.469789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.662 [2024-05-15 03:26:38.564029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.230 03:26:39 chaining -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:08.230 03:26:39 chaining -- common/autotest_common.sh@860 -- # return 0 00:32:08.230 03:26:39 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:32:08.230 03:26:39 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:32:08.489 [2024-05-15 03:26:39.545157] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:32:08.489 nvme0n1 00:32:08.489 true 00:32:08.489 crypto0 00:32:08.489 03:26:39 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:08.489 Running I/O for 5 seconds... 
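Two SPDK applications are alive at this point, so every RPC has to name its socket: the nvmf target answers on the default /var/tmp/spdk.sock inside the namespace, while bdevperf (started above with -r /var/tmp/bperf.sock ... --wait-for-rpc -z) answers on its own socket. A sketch of the configuration step hidden behind the rpc_bperf call above; the attach-controller and crypto-create arguments are assumptions for illustration, since the log only confirms that key "key0" was found and that nvme0n1 and crypto0 appeared:

rpc_bperf() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
rpc_bperf framework_start_init           # leave the --wait-for-rpc pause before bdev RPCs
# connect to the in-namespace target over TCP (flags assumed; NQN is hypothetical):
rpc_bperf bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# stack the crypto vbdev on top; exact bdev_crypto_create flags vary by SPDK version:
rpc_bperf bdev_crypto_create nvme0n1 crypto0 -n key0
# then the timed verify workload is driven exactly as in the trace;
# its results land in the Latency table that follows:
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests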
00:32:13.763 00:32:13.763 Latency(us) 00:32:13.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.763 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:32:13.763 Verification LBA range: start 0x0 length 0x2000 00:32:13.763 crypto0 : 5.02 7760.56 30.31 0.00 0.00 32871.79 3510.86 26963.38 00:32:13.763 =================================================================================================================== 00:32:13.763 Total : 7760.56 30.31 0.00 0.00 32871.79 3510.86 26963.38 00:32:13.763 0 00:32:13.763 03:26:44 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:32:13.763 03:26:44 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:32:13.763 03:26:44 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:32:13.763 03:26:44 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:32:13.763 03:26:44 chaining -- bdev/chaining.sh@39 -- # opcode= 00:32:13.763 03:26:44 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:32:13.763 03:26:44 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:32:13.763 03:26:44 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:32:13.763 03:26:44 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:32:13.763 03:26:44 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:14.022 03:26:44 chaining -- bdev/chaining.sh@205 -- # sequence=77992 00:32:14.022 03:26:44 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:32:14.022 03:26:44 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:32:14.022 03:26:44 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:32:14.022 03:26:44 chaining -- bdev/chaining.sh@39 -- # event=executed 00:32:14.022 03:26:44 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:32:14.022 03:26:44 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:32:14.022 03:26:44 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:32:14.022 03:26:44 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:32:14.022 03:26:44 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:32:14.022 03:26:44 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:14.281 03:26:45 chaining -- bdev/chaining.sh@206 -- # encrypt=38996 00:32:14.281 03:26:45 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:32:14.281 03:26:45 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:32:14.281 03:26:45 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:32:14.281 03:26:45 chaining -- bdev/chaining.sh@39 -- # event=executed 00:32:14.281 03:26:45 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:32:14.281 03:26:45 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:32:14.281 03:26:45 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:32:14.281 03:26:45 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:32:14.281 03:26:45 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:32:14.281 03:26:45 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:14.540 03:26:45 chaining -- bdev/chaining.sh@207 -- # decrypt=38996 
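The post-run checks above read the accel framework's counters back out of bdevperf over RPC and cross-check them: accel_get_stats returns a JSON document whose .sequence_executed field counts completed chained sequences and whose .operations[] array carries per-opcode totals, so the 38996 encrypts plus 38996 decrypts must account for all 77992 sequences (and, per the crc32c query just below, each sequence also executed exactly one crc32c). A condensed version of the helper the trace expands; the real get_stat in chaining.sh takes the event name as an argument, so treat this signature as a simplification:

rpc_bperf() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
get_stat_bperf() {
    local opcode=$1
    if [[ -z $opcode ]]; then
        rpc_bperf accel_get_stats | jq -r .sequence_executed
    else
        rpc_bperf accel_get_stats |
            jq -r --arg op "$opcode" '.operations[] | select(.opcode == $op).executed'
    fi
}
sequence=$(get_stat_bperf)               # 77992 in this run
encrypt=$(get_stat_bperf encrypt)        # 38996
decrypt=$(get_stat_bperf decrypt)        # 38996
(( encrypt + decrypt == sequence ))      # the invariant chaining.sh asserts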
00:32:14.540 03:26:45 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:32:14.540 03:26:45 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:32:14.540 03:26:45 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:32:14.540 03:26:45 chaining -- bdev/chaining.sh@39 -- # event=executed 00:32:14.540 03:26:45 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:32:14.540 03:26:45 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:32:14.540 03:26:45 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:32:14.540 03:26:45 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:32:14.540 03:26:45 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:32:14.540 03:26:45 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:14.799 03:26:45 chaining -- bdev/chaining.sh@208 -- # crc32c=77992 00:32:14.799 03:26:45 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:32:14.799 03:26:45 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:32:14.799 03:26:45 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:32:14.799 03:26:45 chaining -- bdev/chaining.sh@214 -- # killprocess 89817 00:32:14.799 03:26:45 chaining -- common/autotest_common.sh@946 -- # '[' -z 89817 ']' 00:32:14.799 03:26:45 chaining -- common/autotest_common.sh@950 -- # kill -0 89817 00:32:14.799 03:26:45 chaining -- common/autotest_common.sh@951 -- # uname 00:32:14.799 03:26:45 chaining -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:14.799 03:26:45 chaining -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89817 00:32:14.799 03:26:45 chaining -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:14.799 03:26:45 chaining -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:14.799 03:26:45 chaining -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89817' 00:32:14.799 killing process with pid 89817 00:32:14.799 03:26:45 chaining -- common/autotest_common.sh@965 -- # kill 89817 00:32:14.799 Received shutdown signal, test time was about 5.000000 seconds 00:32:14.799 00:32:14.799 Latency(us) 00:32:14.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.799 =================================================================================================================== 00:32:14.799 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:14.799 03:26:45 chaining -- common/autotest_common.sh@970 -- # wait 89817 00:32:15.058 03:26:46 chaining -- bdev/chaining.sh@219 -- # bperfpid=90974 00:32:15.058 03:26:46 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:32:15.058 03:26:46 chaining -- bdev/chaining.sh@221 -- # waitforlisten 90974 /var/tmp/bperf.sock 00:32:15.058 03:26:46 chaining -- common/autotest_common.sh@827 -- # '[' -z 90974 ']' 00:32:15.058 03:26:46 chaining -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:15.058 03:26:46 chaining -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:15.058 03:26:46 chaining -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:32:15.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:15.058 03:26:46 chaining -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:15.058 03:26:46 chaining -- common/autotest_common.sh@10 -- # set +x 00:32:15.058 [2024-05-15 03:26:46.060155] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:32:15.058 [2024-05-15 03:26:46.060213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90974 ] 00:32:15.058 [2024-05-15 03:26:46.159158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.316 [2024-05-15 03:26:46.254631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.884 03:26:47 chaining -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:15.884 03:26:47 chaining -- common/autotest_common.sh@860 -- # return 0 00:32:15.884 03:26:47 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:32:15.884 03:26:47 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:32:16.452 [2024-05-15 03:26:47.427718] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:32:16.452 nvme0n1 00:32:16.452 true 00:32:16.452 crypto0 00:32:16.452 03:26:47 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:16.452 Running I/O for 5 seconds... 00:32:21.735 00:32:21.735 Latency(us) 00:32:21.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.735 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:32:21.735 Verification LBA range: start 0x0 length 0x200 00:32:21.735 crypto0 : 5.01 1601.05 100.07 0.00 0.00 19573.12 1419.95 20846.69 00:32:21.735 =================================================================================================================== 00:32:21.735 Total : 1601.05 100.07 0.00 0.00 19573.12 1419.95 20846.69 00:32:21.735 0 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@39 -- # opcode= 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@233 -- # sequence=16034 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@39 -- # event=executed 00:32:21.735 03:26:52 chaining -- 
bdev/chaining.sh@39 -- # opcode=encrypt 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:32:21.735 03:26:52 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:21.993 03:26:53 chaining -- bdev/chaining.sh@234 -- # encrypt=8017 00:32:21.993 03:26:53 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:32:21.993 03:26:53 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:32:21.993 03:26:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:32:21.993 03:26:53 chaining -- bdev/chaining.sh@39 -- # event=executed 00:32:21.993 03:26:53 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:32:21.993 03:26:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:32:21.993 03:26:53 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:32:21.993 03:26:53 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:32:21.993 03:26:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:32:21.993 03:26:53 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:22.250 03:26:53 chaining -- bdev/chaining.sh@235 -- # decrypt=8017 00:32:22.250 03:26:53 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:32:22.250 03:26:53 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:32:22.250 03:26:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:32:22.250 03:26:53 chaining -- bdev/chaining.sh@39 -- # event=executed 00:32:22.250 03:26:53 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:32:22.250 03:26:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:32:22.250 03:26:53 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:32:22.250 03:26:53 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:32:22.250 03:26:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:32:22.250 03:26:53 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:22.507 03:26:53 chaining -- bdev/chaining.sh@236 -- # crc32c=16034 00:32:22.507 03:26:53 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:32:22.507 03:26:53 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:32:22.507 03:26:53 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:32:22.507 03:26:53 chaining -- bdev/chaining.sh@242 -- # killprocess 90974 00:32:22.507 03:26:53 chaining -- common/autotest_common.sh@946 -- # '[' -z 90974 ']' 00:32:22.507 03:26:53 chaining -- common/autotest_common.sh@950 -- # kill -0 90974 00:32:22.507 03:26:53 chaining -- common/autotest_common.sh@951 -- # uname 00:32:22.507 03:26:53 chaining -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:22.507 03:26:53 chaining -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90974 00:32:22.765 03:26:53 chaining -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:22.765 03:26:53 chaining -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:22.765 03:26:53 chaining -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90974' 00:32:22.765 killing process with pid 90974 00:32:22.765 03:26:53 chaining -- common/autotest_common.sh@965 -- # kill 90974 00:32:22.765 Received shutdown signal, test time was about 5.000000 seconds 00:32:22.765 00:32:22.765 Latency(us) 00:32:22.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.765 =================================================================================================================== 00:32:22.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:22.765 03:26:53 chaining -- common/autotest_common.sh@970 -- # wait 90974 00:32:22.765 03:26:53 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:32:22.765 03:26:53 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:22.765 03:26:53 chaining -- nvmf/common.sh@117 -- # sync 00:32:22.765 03:26:53 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:22.765 03:26:53 chaining -- nvmf/common.sh@120 -- # set +e 00:32:22.765 03:26:53 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:22.765 03:26:53 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:22.765 rmmod nvme_tcp 00:32:23.026 rmmod nvme_fabrics 00:32:23.026 rmmod nvme_keyring 00:32:23.026 03:26:53 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:23.026 03:26:53 chaining -- nvmf/common.sh@124 -- # set -e 00:32:23.026 03:26:53 chaining -- nvmf/common.sh@125 -- # return 0 00:32:23.026 03:26:53 chaining -- nvmf/common.sh@489 -- # '[' -n 89572 ']' 00:32:23.026 03:26:53 chaining -- nvmf/common.sh@490 -- # killprocess 89572 00:32:23.026 03:26:53 chaining -- common/autotest_common.sh@946 -- # '[' -z 89572 ']' 00:32:23.026 03:26:53 chaining -- common/autotest_common.sh@950 -- # kill -0 89572 00:32:23.026 03:26:53 chaining -- common/autotest_common.sh@951 -- # uname 00:32:23.026 03:26:53 chaining -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:23.026 03:26:53 chaining -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89572 00:32:23.026 03:26:54 chaining -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:23.026 03:26:54 chaining -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:23.026 03:26:54 chaining -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89572' 00:32:23.026 killing process with pid 89572 00:32:23.026 03:26:54 chaining -- common/autotest_common.sh@965 -- # kill 89572 00:32:23.027 [2024-05-15 03:26:54.019469] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:23.027 03:26:54 chaining -- common/autotest_common.sh@970 -- # wait 89572 00:32:23.286 03:26:54 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:23.286 03:26:54 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:23.286 03:26:54 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:23.286 03:26:54 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:23.286 03:26:54 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:23.286 03:26:54 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.286 03:26:54 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:23.286 03:26:54 chaining -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:32:25.189 03:26:56 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:25.189 03:26:56 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:32:25.189 00:32:25.189 real 0m47.570s 00:32:25.189 user 1m0.619s 00:32:25.189 sys 0m10.147s 00:32:25.189 03:26:56 chaining -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:25.189 03:26:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:32:25.189 ************************************ 00:32:25.189 END TEST chaining 00:32:25.189 ************************************ 00:32:25.446 03:26:56 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:32:25.446 03:26:56 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:25.446 03:26:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:25.446 03:26:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:25.446 03:26:56 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:32:25.446 03:26:56 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:32:25.446 03:26:56 -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:25.446 03:26:56 -- common/autotest_common.sh@10 -- # set +x 00:32:25.446 03:26:56 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:32:25.446 03:26:56 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:32:25.446 03:26:56 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:32:25.446 03:26:56 -- common/autotest_common.sh@10 -- # set +x 00:32:29.696 INFO: APP EXITING 00:32:29.696 INFO: killing all VMs 00:32:29.696 INFO: killing vhost app 00:32:29.696 INFO: EXIT DONE 00:32:32.228 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:32.796 Waiting for block devices as requested 00:32:32.796 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:32.796 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:32.796 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:33.054 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:33.054 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:33.054 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:33.313 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:33.313 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:33.313 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:33.313 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:33.572 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:33.572 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:33.572 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:33.831 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:33.831 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:33.831 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:33.831 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:37.116 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:37.683 Cleaning 00:32:37.683 Removing: /var/run/dpdk/spdk0/config 00:32:37.683 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:37.683 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:37.683 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:37.683 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:37.683 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:37.683 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:37.683 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:37.683 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:37.683 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:37.683 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:37.683 Removing: /dev/shm/nvmf_trace.0 
00:32:37.683 Removing: /dev/shm/spdk_tgt_trace.pid3993220 00:32:37.683 Removing: /var/run/dpdk/spdk0 00:32:37.683 Removing: /var/run/dpdk/spdk_pid10333 00:32:37.683 Removing: /var/run/dpdk/spdk_pid12882 00:32:37.683 Removing: /var/run/dpdk/spdk_pid20810 00:32:37.683 Removing: /var/run/dpdk/spdk_pid23369 00:32:37.683 Removing: /var/run/dpdk/spdk_pid2464 00:32:37.683 Removing: /var/run/dpdk/spdk_pid28707 00:32:37.683 Removing: /var/run/dpdk/spdk_pid29087 00:32:37.683 Removing: /var/run/dpdk/spdk_pid29520 00:32:37.683 Removing: /var/run/dpdk/spdk_pid30076 00:32:37.683 Removing: /var/run/dpdk/spdk_pid30664 00:32:37.683 Removing: /var/run/dpdk/spdk_pid31927 00:32:37.683 Removing: /var/run/dpdk/spdk_pid32826 00:32:37.683 Removing: /var/run/dpdk/spdk_pid33141 00:32:37.683 Removing: /var/run/dpdk/spdk_pid35204 00:32:37.683 Removing: /var/run/dpdk/spdk_pid37258 00:32:37.683 Removing: /var/run/dpdk/spdk_pid39202 00:32:37.683 Removing: /var/run/dpdk/spdk_pid3989507 00:32:37.683 Removing: /var/run/dpdk/spdk_pid3992034 00:32:37.683 Removing: /var/run/dpdk/spdk_pid3993220 00:32:37.683 Removing: /var/run/dpdk/spdk_pid3993970 00:32:37.683 Removing: /var/run/dpdk/spdk_pid3995242 00:32:37.683 Removing: /var/run/dpdk/spdk_pid3995615 00:32:37.683 Removing: /var/run/dpdk/spdk_pid3996584 00:32:37.683 Removing: /var/run/dpdk/spdk_pid3996716 00:32:37.683 Removing: /var/run/dpdk/spdk_pid3996944 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4000041 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4001970 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4002245 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4002532 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4002859 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4003338 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4003590 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4003833 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4004110 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4004839 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4008027 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4008273 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4008557 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4008830 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4009072 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4009134 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4009387 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4009710 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4010047 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4010337 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4010587 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4010832 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4011082 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4011327 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4011577 00:32:37.683 Removing: /var/run/dpdk/spdk_pid4011863 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4012211 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4012528 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4012781 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4013024 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4013276 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4013521 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4013774 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4014025 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4014337 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4014676 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4014981 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4015233 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4015694 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4015952 00:32:37.942 Removing: 
/var/run/dpdk/spdk_pid4016416 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4016675 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4017137 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4017395 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4017671 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4017983 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4018549 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4018823 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4019054 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4023772 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4025758 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4028422 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4029574 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4030941 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4031206 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4031348 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4031467 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4036172 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4036819 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4037959 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4038268 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4044800 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4050766 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4056886 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4070913 00:32:37.942 Removing: /var/run/dpdk/spdk_pid40799 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4084440 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4098456 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4114900 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4130571 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4145803 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4150779 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4154619 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4161563 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4164642 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4170510 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4174802 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4181616 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4185034 00:32:37.942 Removing: /var/run/dpdk/spdk_pid4193504 00:32:37.942 Removing: /var/run/dpdk/spdk_pid42824 00:32:37.942 Removing: /var/run/dpdk/spdk_pid44881 00:32:37.942 Removing: /var/run/dpdk/spdk_pid46939 00:32:37.942 Removing: /var/run/dpdk/spdk_pid48591 00:32:37.942 Removing: /var/run/dpdk/spdk_pid49299 00:32:37.942 Removing: /var/run/dpdk/spdk_pid49778 00:32:37.942 Removing: /var/run/dpdk/spdk_pid52287 00:32:37.942 Removing: /var/run/dpdk/spdk_pid54681 00:32:37.942 Removing: /var/run/dpdk/spdk_pid56850 00:32:37.942 Removing: /var/run/dpdk/spdk_pid58223 00:32:37.942 Removing: /var/run/dpdk/spdk_pid59707 00:32:37.942 Removing: /var/run/dpdk/spdk_pid60510 00:32:37.942 Removing: /var/run/dpdk/spdk_pid60659 00:32:37.942 Removing: /var/run/dpdk/spdk_pid60727 00:32:38.201 Removing: /var/run/dpdk/spdk_pid61153 00:32:38.201 Removing: /var/run/dpdk/spdk_pid61222 00:32:38.201 Removing: /var/run/dpdk/spdk_pid63071 00:32:38.201 Removing: /var/run/dpdk/spdk_pid65056 00:32:38.201 Removing: /var/run/dpdk/spdk_pid67005 00:32:38.201 Removing: /var/run/dpdk/spdk_pid67956 00:32:38.201 Removing: /var/run/dpdk/spdk_pid68885 00:32:38.201 Removing: /var/run/dpdk/spdk_pid69131 00:32:38.201 Removing: /var/run/dpdk/spdk_pid69222 00:32:38.201 Removing: /var/run/dpdk/spdk_pid69391 00:32:38.201 Removing: /var/run/dpdk/spdk_pid70403 00:32:38.201 Removing: /var/run/dpdk/spdk_pid71081 00:32:38.201 Removing: /var/run/dpdk/spdk_pid71721 00:32:38.201 Removing: /var/run/dpdk/spdk_pid74171 00:32:38.201 Removing: 
/var/run/dpdk/spdk_pid76446 00:32:38.201 Removing: /var/run/dpdk/spdk_pid78719 00:32:38.201 Removing: /var/run/dpdk/spdk_pid79977 00:32:38.201 Removing: /var/run/dpdk/spdk_pid81568 00:32:38.201 Removing: /var/run/dpdk/spdk_pid82263 00:32:38.201 Removing: /var/run/dpdk/spdk_pid82286 00:32:38.201 Removing: /var/run/dpdk/spdk_pid86181 00:32:38.201 Removing: /var/run/dpdk/spdk_pid86444 00:32:38.201 Removing: /var/run/dpdk/spdk_pid86669 00:32:38.201 Removing: /var/run/dpdk/spdk_pid86729 00:32:38.201 Removing: /var/run/dpdk/spdk_pid86985 00:32:38.201 Removing: /var/run/dpdk/spdk_pid87627 00:32:38.201 Removing: /var/run/dpdk/spdk_pid88628 00:32:38.201 Removing: /var/run/dpdk/spdk_pid89817 00:32:38.201 Removing: /var/run/dpdk/spdk_pid90974 00:32:38.201 Clean 00:32:38.201 03:27:09 -- common/autotest_common.sh@1447 -- # return 0 00:32:38.201 03:27:09 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:32:38.201 03:27:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:38.201 03:27:09 -- common/autotest_common.sh@10 -- # set +x 00:32:38.201 03:27:09 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:32:38.201 03:27:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:38.201 03:27:09 -- common/autotest_common.sh@10 -- # set +x 00:32:38.460 03:27:09 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:32:38.460 03:27:09 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]] 00:32:38.460 03:27:09 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log 00:32:38.460 03:27:09 -- spdk/autotest.sh@387 -- # hash lcov 00:32:38.460 03:27:09 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:38.460 03:27:09 -- spdk/autotest.sh@389 -- # hostname 00:32:38.460 03:27:09 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info 00:32:38.460 geninfo: WARNING: invalid characters removed from testname! 
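The coverage wrap-up that starts here follows the usual lcov three-step (the capture ran above; the merge and strip passes follow below): collect a test-time tracefile tagged with the hostname, merge it with the pre-test baseline, and subtract source trees that should not count toward SPDK's own coverage. Condensed, with the long --rc/--no-external flags elided and $SPDK_DIR standing in for the workspace path:

lcov -q -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info      # capture (the geninfo warning above is cosmetic)
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info      # merge baseline + test data
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r cov_total.info "$pat" -o cov_total.info           # drop vendored and system code
done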
00:33:10.544 03:27:36 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:33:10.544 03:27:40 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:33:13.079 03:27:43 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:33:15.615 03:27:46 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:33:18.903 03:27:49 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:33:21.438 03:27:52 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:33:23.972 03:27:54 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:23.972 03:27:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:33:23.972 03:27:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:23.972 03:27:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.972 03:27:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.972 03:27:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.972 03:27:54 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.972 03:27:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.972 03:27:54 -- paths/export.sh@5 -- $ export PATH 00:33:23.972 03:27:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.972 03:27:54 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:23.972 03:27:54 -- common/autobuild_common.sh@437 -- $ date +%s 00:33:23.972 03:27:55 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715736475.XXXXXX 00:33:23.972 03:27:55 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715736475.42dQts 00:33:23.972 03:27:55 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:33:23.972 03:27:55 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:33:23.972 03:27:55 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:33:23.972 03:27:55 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:23.972 03:27:55 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:23.972 03:27:55 -- common/autobuild_common.sh@453 -- $ get_config_params 00:33:23.972 03:27:55 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:33:23.972 03:27:55 -- common/autotest_common.sh@10 -- $ set +x 00:33:23.972 03:27:55 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk' 00:33:23.972 03:27:55 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:33:23.972 03:27:55 -- pm/common@17 -- $ local monitor 00:33:23.972 03:27:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:23.972 03:27:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:23.972 03:27:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:23.972 03:27:55 -- pm/common@21 -- $ date +%s 00:33:23.972 03:27:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:23.972 03:27:55 -- 
pm/common@21 -- $ date +%s 00:33:23.972 03:27:55 -- pm/common@25 -- $ sleep 1 00:33:23.972 03:27:55 -- pm/common@21 -- $ date +%s 00:33:23.972 03:27:55 -- pm/common@21 -- $ date +%s 00:33:23.972 03:27:55 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715736475 00:33:23.972 03:27:55 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715736475 00:33:23.972 03:27:55 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715736475 00:33:23.972 03:27:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715736475 00:33:23.972 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715736475_collect-vmstat.pm.log 00:33:23.972 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715736475_collect-cpu-load.pm.log 00:33:23.972 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715736475_collect-cpu-temp.pm.log 00:33:23.972 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715736475_collect-bmc-pm.bmc.pm.log 00:33:24.908 03:27:56 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:33:24.908 03:27:56 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:33:24.908 03:27:56 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:24.908 03:27:56 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:24.908 03:27:56 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:24.908 03:27:56 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:24.908 03:27:56 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:24.908 03:27:56 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:24.908 03:27:56 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:33:25.167 03:27:56 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:25.167 03:27:56 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:25.167 03:27:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:25.167 03:27:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:25.167 03:27:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:25.167 03:27:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:25.167 03:27:56 -- pm/common@44 -- $ pid=103358 00:33:25.167 03:27:56 -- pm/common@50 -- $ kill -TERM 103358 00:33:25.168 03:27:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:25.168 03:27:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:25.168 03:27:56 -- pm/common@44 -- $ pid=103359 00:33:25.168 03:27:56 -- pm/common@50 -- $ 
kill -TERM 103359 00:33:25.168 03:27:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:25.168 03:27:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:25.168 03:27:56 -- pm/common@44 -- $ pid=103361 00:33:25.168 03:27:56 -- pm/common@50 -- $ kill -TERM 103361 00:33:25.168 03:27:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:25.168 03:27:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:25.168 03:27:56 -- pm/common@44 -- $ pid=103386 00:33:25.168 03:27:56 -- pm/common@50 -- $ sudo -E kill -TERM 103386 00:33:25.168 + [[ -n 3868268 ]] 00:33:25.168 + sudo kill 3868268 00:33:25.178 [Pipeline] } 00:33:25.201 [Pipeline] // stage 00:33:25.207 [Pipeline] } 00:33:25.225 [Pipeline] // timeout 00:33:25.231 [Pipeline] } 00:33:25.246 [Pipeline] // catchError 00:33:25.251 [Pipeline] } 00:33:25.268 [Pipeline] // wrap 00:33:25.277 [Pipeline] } 00:33:25.292 [Pipeline] // catchError 00:33:25.303 [Pipeline] stage 00:33:25.305 [Pipeline] { (Epilogue) 00:33:25.321 [Pipeline] catchError 00:33:25.323 [Pipeline] { 00:33:25.340 [Pipeline] echo 00:33:25.342 Cleanup processes 00:33:25.349 [Pipeline] sh 00:33:25.637 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:25.637 103472 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache 00:33:25.637 103757 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:25.652 [Pipeline] sh 00:33:25.935 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:25.935 ++ grep -v 'sudo pgrep' 00:33:25.935 ++ awk '{print $1}' 00:33:25.935 + sudo kill -9 103472 00:33:25.947 [Pipeline] sh 00:33:26.228 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:41.156 [Pipeline] sh 00:33:41.441 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:41.441 Artifacts sizes are good 00:33:41.457 [Pipeline] archiveArtifacts 00:33:41.465 Archiving artifacts 00:33:41.640 [Pipeline] sh 00:33:41.927 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest 00:33:41.943 [Pipeline] cleanWs 00:33:41.954 [WS-CLEANUP] Deleting project workspace... 00:33:41.954 [WS-CLEANUP] Deferred wipeout is used... 00:33:41.961 [WS-CLEANUP] done 00:33:41.964 [Pipeline] } 00:33:41.986 [Pipeline] // catchError 00:33:42.001 [Pipeline] sh 00:33:42.287 + logger -p user.info -t JENKINS-CI 00:33:42.297 [Pipeline] } 00:33:42.316 [Pipeline] // stage 00:33:42.323 [Pipeline] } 00:33:42.342 [Pipeline] // node 00:33:42.348 [Pipeline] End of Pipeline 00:33:42.383 Finished: SUCCESS
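The epilogue above reuses the same two cleanup idioms the prologue used: a pgrep sweep for any stray processes still rooted in the workspace (filtering out the pgrep itself before the kill -9), and the pidfiles each power/CPU monitor wrote at startup, which stop_monitor_resources walks with kill -TERM. Minimal versions of both idioms, with $WORKSPACE and $OUTPUT_DIR standing in for the Jenkins paths:

# stray-process sweep (the 'Cleanup processes' stage):
sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}' |
    xargs -r sudo kill -9                     # xargs -r: do nothing when nothing matched
# monitor teardown (stop_monitor_resources):
for pidfile in "$OUTPUT_DIR"/power/*.pid; do
    [[ -e $pidfile ]] || continue             # glob stays literal when no monitors ran
    kill -TERM "$(<"$pidfile")" 2>/dev/null || true   # monitor may already have exited
done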